Background story:
I have a web portal in .NET 3.5 on an IIS 6 web server. Currently there is a page that is given a value and based on that value looks up a PDF file on a web service and displays the results to the user in another tab in the web page. This is done with the following code.
context.Response.ClearContent();
context.Response.ClearHeaders();
context.Response.Clear();
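// Note: "Accept-Header" is not a standard HTTP header; presumably "Content-Length" was intended here.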
context.Response.AddHeader("Accept-Header", pdfStream.Length.ToString());
context.Response.ContentType = "application/pdf";
context.Response.BinaryWrite(pdfStream.ToArray());
context.Response.Flush();
This works and has worked for years. However, we got a report that a particular client was getting the same PDF back every time until they cleared their temporary internet cache.
I thought, oh cool, this is an easy one. I will just add cache headers to the response so it is never cached. So I added the following:
context.Response.Cache.SetCacheability(HttpCacheability.NoCache);//IE set to not cache
context.Response.Cache.SetNoStore();//Firefox/Chrome not to cache
context.Response.Cache.SetExpires(DateTime.UtcNow); //for safe measure expire it immediately
After a quick test, I got exactly what I was expecting in the response headers:
Cache-Control: no-cache, no-store
Pragma: no-cache
Expires: -1
The Problem:
So this went live. Everything seemed cool on day one. The day after, bam, everyone started getting white screens and no PDF displayed. After further investigation, I found out it was only IE 6, 7, and 8. Chrome was fine, Firefox fine, Safari fine, even IE 9 fine. Without knowing why this happened, I reverted my change and deployed it, and everything started working again.
I have searched all over trying to find out why my caching headers confuse IE 6-8, to no avail. Has anyone experienced this type of issue with IE 6-8? Is there something I am missing? Thanks for any insight.
I found the solution. Here is what tipped me off. Here is a link
Basically, IE 8 (and lower) has issues with the Cache-Control header if it contains no-cache or no-store. I was able to work around the problem by allowing private caching only and setting a very short max-age so the response expires almost immediately.
// IE 8 and lower have an issue with the "Cache-Control: no-cache" and "Cache-Control: no-store" headers.
// The workaround is allowing private caching only, but expiring it immediately.
if ((context.Request.Browser.Browser.ToLower() == "ie") && (context.Request.Browser.MajorVersion < 9))
{
    context.Response.Cache.SetCacheability(HttpCacheability.Private);
    context.Response.Cache.SetMaxAge(TimeSpan.FromMilliseconds(1));
}
else
{
    context.Response.Cache.SetCacheability(HttpCacheability.NoCache); // IE: do not cache
    context.Response.Cache.SetNoStore();                              // Firefox/Chrome: do not store
    context.Response.Cache.SetExpires(DateTime.UtcNow);               // for good measure, expire immediately
}
Related
We have an established site that is now being affected by CSP rules. I’ve added all the scripts we need to the Content-Security-Policy header.
When visiting the site using private browsing, or from a device that hasn’t been to the site before, I get the new CSP header and everything works.
However, users that have been to the site before get the old headers, and they get CSP warnings.
NB: I cannot use expire 0 or similar, as the browsers are not looking for the new headers, so they never know that the headers have expired.
I’m looking for a way to tell the browser “hey, you should checkout my cool new headers because they’re new”.
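For reference, a minimal sketch of how such a header might be emitted from ASP.NET (an assumption on my part; the directive values are invented for illustration):
// Sketch (Global.asax.cs): append a CSP header to every response.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Response.AppendHeader(
        "Content-Security-Policy",
        "default-src 'self'; script-src 'self' https://scripts.example.com");
}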
Turns out I was being foiled by Local Storage that was overwriting the CSP header. Even clearing the cache doesn’t solve the problem as Local Storage remains.
Hope this helps somebody else!
I am writing in C# using ASP.NET 4.0.
I am authenticating user credentials via SQL lookup and if valid I am storing the username in a session variable then redirecting the user back to the main page. Pretty simple.
if (!db.isValidLogin(userName, passWord))
{
    // invalid login, show it!
    // (code to tell the user the credentials are invalid)
}
else
{
    // show login successful!
    // (update some items on the screen)
    Session["username"] = userName.ToUpper();
    Response.Redirect("/");
}
This is not yet over SSL as it's internal development at this point.
When I use Chrome Version "25.0.1364.172 m" I am properly redirected and I am "logged in". My screen is representative of that by showing me my user name and allowing me access to features that authentication allows.
When I use (32-bit) IE 9, version "9.0.8112.16421", with the same server-side code and procedure, my session variable "username" is gone after the redirect. In fact, the session has a count of 0 items. BEFORE the redirect, the session variable is set and it is correct.
I have the same results on a Windows Server 2008 R2 64-bit box and a Windows 7 64-bit box.
I am using a single server hosting both IIS and SQL. I am not using a session server.
I have traced it out: the code runs exactly as desired up until the redirect. Receiving credentials, executing my stored procedure to validate, setting the session variable before redirecting (I can see the session and the variable, and the value is correct), and then redirecting. As stated, with Chrome it works EXACTLY as desired; with IE the session is lost on redirect.
I have tried this as well with no success:
Response.Redirect("/", false);
So I'm convinced that something IE is doing, maybe with how it sets cookies on the client, is causing a mismatch between the browser and the server session.
Should I not be doing a Response.Redirect? And if I do a Response.Redirect, how do I keep the session from resetting? Once again, keep in mind this doesn't happen when I use Chrome.
Frustrating...
Thanks for any help!
NEW INFO
After attempting to turn off IE caching per an answer, I decided to output the session ID to the browser so I could see what it was.
The behavior is more basic than the login and redirect:
In IE, simply refreshing the browser with F5 causes a new session to be created on the server. On each refresh I receive a NEW session ID.
Testing this with Chrome, I do not get a new session ID unless I call Session.Abandon, time out my session, or close and restart the browser.
I was only calling Session.Abandon when the user clicked log out, but I have commented out that code (just in case) to ensure that I'm not abandoning it by accident.
Somewhere between actual page refreshes, IE is presenting itself to the server as a new session... ARGH.
For example:
Chrome:
Before login: myjuzrmccerk1t4eakcliq14
After login: myjuzrmccerk1t4eakcliq14
IE:
Before login: unyebuc2ikac12xnhpssy0em
After login: unyebuc2ikac12xnhpssy0em
Refreshes with F5 or Ctrl-R:
one: ptjt42fjwzgdreyyyo3cmvrs
two: s1hd5aatl5yexeuc125aqhst
three: kbpflurcdcxubux3scmdm4k5
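For reference, those IDs were captured by simply writing the session ID onto the page; a minimal sketch of that debugging aid (the wording is assumed, not the original code):
// Debugging aid (sketch): write the current session ID on every page load.
protected void Page_Load(object sender, EventArgs e)
{
    Response.Write("SessionID: " + Session.SessionID);
}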
Update 2
I have changed the site to use "State Server" for the session and started the appropriate service... There is no change in behavior.
ANSWER
Since my rep is low, this won't let me answer my own question for another 3 hours... but here it is.
I found a fix... through trial and error.
InProc and StateServer in sessionState both gave the same results until I added cookieless="true":
<sessionState mode="StateServer" cookieless="true" />
This causes the session state to be consistent in both Chrome and IE (where the problem was), and my session ID no longer changes between page refreshes. I was unable to determine WHY this happens, but it is fixed nonetheless. Thanks Mike and antinescience for your help!
There are some other reports that indicate that IE's caching mechanism (which is widely regarded as, well, not great) may be to blame here. Can you try appending the following to your page:
// Stop Caching in IE
Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
// Stop Caching in Firefox
Response.Cache.SetNoStore();
...and see if that has any effect? The other alternative is you could do:
int randomNumber = new Random().Next(1, 1000);
Response.Redirect("/?nocache=" + randomNumber);
...just for testing. Heck, you could slap the date in as a number to test as well.
I had the same problem for a couple of days and finally figured out why the session changed on each refresh. After using the Response.Redirect(url, false) method, I realized that I was entering the URL as an absolute URI, like "http://ServerIP/File/Page.aspx". I used the application-relative path instead, "~/File/Page.aspx", and my problem was solved! IE thinks the server has changed when you use the absolute URI instead of the relative path. I hope this helps.
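In code, the difference looks like this (a minimal sketch using the paths from above):
// Problematic: an absolute URI makes IE treat the redirect as a different server.
// Response.Redirect("http://ServerIP/File/Page.aspx", false);

// Working: an application-relative path keeps the session intact.
Response.Redirect("~/File/Page.aspx", false);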
I had the same problem with a webpage which was hosted inside an IFrame. Troubleshooting showed that the ASP.NET Session cookie was lost along the way, and it only happened when using Internet Explorer. When I opened up my webpage in a separate tab in IE everything worked fine.
The problem was caused by security in Internet Explorer. It will not persist cookies unless there is a P3P HTTP header. You can see the blocked URLs by going to IE->View->Webpage privacy report..., and there choose to show "Restricted websites".
I solved the problem by adding a dummy P3P header to every request. The header looks like this:
P3P:"Bogus P3P header because Internet Explorer requires one"
This is the same approach facebook.com uses. Their P3P header looks like this:
p3p:CP="Facebook does not have a P3P policy. Learn why here: http://(...)/p3p"
See also Cookie blocked/not saved in IFRAME in Internet Explorer
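A minimal sketch of adding such a header to every response (assuming ASP.NET and Global.asax; the header text mirrors the dummy value above):
// Sketch (Global.asax.cs): add a dummy P3P header so IE will persist cookies inside an IFrame.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Response.AddHeader(
        "P3P", "CP=\"Bogus P3P header because Internet Explorer requires one\"");
}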
I had this issue too, this SO response solved my problem. If your hostname has underscores (which seems to be invalid), IE seems to drop the session (!).
I have a web application that contains a few hundred small images, and is performing quite badly on load.
To combat this, I would like to cache static files in the browser.
Using a servlet filter on Tomcat 7, I now set the expires header correctly on static files, and can see that this is returned to Chrome:
Accept-Ranges:bytes
Cache-Control:max-age=3600
Content-Length:40284
Content-Type:text/css
Date:Sat, 14 Apr 2012 09:37:04 GMT
ETag:W/"40284-1333964814000"
Expires:Sat, 14 Apr 2012 10:37:05 GMT
Last-Modified:Mon, 09 Apr 2012 09:46:54 GMT
Server:Apache-Coyote/1.1
However, I notice that Chrome is still doing a round trip to the server for each static resource on reloads, sending an If-Modified-Since header and getting a correct 304 Not Modified response from Tomcat.
Is there any way to make Chrome avoid these 100+ requests to the server until the expiry has genuinely passed?
There are three ways of loading a page:
1. Putting the URL in the address bar and pressing Enter, which is equivalent to navigating from a hyperlink (the default browsing behaviour). This honours the Expires header: the browser first checks whether the cached copy of the static content is still valid, and if the Expires time is in the future it loads it directly from the cache, making no request to the server at all. If the cached content is invalid, it makes a request to the server.
2. Pressing F5 to refresh the page. This sends an If-Modified-Since header to the server to verify whether the content has changed; you get a 200 response if it has, and a 304 if it has not. In both cases the image is not loaded on the page until a response is received from the server.
3. Pressing Ctrl+F5, which forcefully clears the cache and reloads every static item on the page, without spending time verifying whether the images have changed.
I guess the behaviour you are expecting is the first kind, so the only thing you should be looking at is the way you are loading the page. Normally people are not going to press F5 or Ctrl+F5, so your static content will not be re-validated every time.
In short, just remember: load the page by pressing Enter in the address bar instead. The browser will honour the headers that you have provided. This is not specific to Chrome; it's a W3C standard.
Be careful when you are testing. I noticed that in Chrome version 20, if I hit F5 to reload the page, I see new requests in the network panel.
However, if I place the cursor in the address bar after the current page URL and hit Enter, the resources whose headers allow caching are served from the cache.
Also a good read:
http://betterexplained.com/articles/how-to-optimize-your-site-with-http-caching/
Assuming you have ruled out the various gotchas that have already been suggested, I found that Google Chrome can ignore the Cache-Control directive unless it includes public, and public has to come first. For example:
Cache-Control: public, max-age=3600
In my experiments I also removed ETags from the server response, so that could be a factor, but I didn't go back and check.
I send back an image with the following HTTP response header:
Cache-Control: private,max-age=86400
My understanding is that the browser should not even ask for this file for 24 hours (86,400 = 60s * 60m * 24h).
What I'm seeing on subsequent requests is that it still asks for the file, but gets back a "304 Not Modified." This is good, but I want to remove even that request/response.
What header is required to prevent the browser from even bothering to ask for the file, and just have it blindly use the file it has in local cache, until that file expires?
It all really depends on how you're testing this. On Firefox 3.6 and IE8, clicking on a link, and then on a link that moves you back to the first page, will use the cache correctly with max-age. Hitting the Return key again in the URL field will show the same behavior.
However, hitting F5 will ask again for the file but allows 304 responses.
Hitting Ctrl+F5 will always ask again for the file, with Cache-Control and Pragma set to no-cache, forcing a 200 response.
This simply can't be done reliably in HTML < 5.
You could use client side storage in HTML5 or use a browser extension such as Gears to provide this functionality.
I am streaming a PDF to the browser in ASP.NET 2.0. This works in all browsers over HTTP, and in all browsers except IE over HTTPS. As far as I know, this used to work (over the past 5 years or so) in all versions of IE, but our clients have only recently started to report issues. I suspect the 'Do not save encrypted pages to disk' security option used to be disabled by default and at some point became enabled by default (Internet Options -> Advanced -> Security). Turning this option off helps as a workaround, but is not viable as a long-term solution.
The error message I am receiving is:
Internet Explorer cannot download OutputReport.aspx from www.sitename.com.
Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.
The tool used to create the PDF is ActiveReports from DataDynamics. Once the PDF is created, here is the code to send it down:
Response.ClearContent()
Response.ClearHeaders()
Response.AddHeader("cache-control", "max-age=1")
Response.ContentType = "application/pdf"
Response.AddHeader("content-disposition", "attachment; filename=statement.pdf")
Response.AddHeader("content-length", mem_stream.Length.ToString)
Response.BinaryWrite(mem_stream.ToArray())
Response.Flush()
Response.End()
Note: If I don't explicitly specify cache-control then .NET sends no-cache on my behalf, so I have tried setting cache-control to: private or public or maxage=#, but none of those seem to work.
Here is the twist: when I run Fiddler to inspect the response headers, everything works fine. The headers that I receive are:
HTTP/1.1 200 OK
Cache-Control: max-age=1
Date: Wed, 29 Jul 2009 17:57:58 GMT
Content-Type: application/pdf
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
content-disposition: attachment; filename=statement.pdf
Content-Encoding: gzip
Vary: Accept-Encoding
Transfer-Encoding: chunked
As soon as I turn Fiddler off and try again, it fails again. One other thing that I noticed is that when Fiddler is running I get a "There is a problem with this website's security certificate" warning message, and I have to click "Continue to this website (not recommended)" to get through. When Fiddler is off, I do not encounter this security warning and it fails right away.
I am curious what is happening between Fiddler and the browser so that it works when Fiddler is running but breaks when it's not, but more importantly, does anyone have any ideas how I could change my code so streaming PDFs to IE will work without making changes to the client machine?
Update: The Fiddler issues are resolved, thank you very much EricLaw, so now it behaves consistently (broken, with or without Fiddler running).
Based on Google searching, there seem to be plenty of reports of this same issue all over the web, each with its own specific combination of response headers that seemed to fix the problem for that individual case. I've tried many of these suggestions, including adding an ETag, a Last-Modified date, removing the Vary header (using Fiddler), and dozens of combinations of the Cache-Control and/or Pragma headers. I tried "Content-Transfer-Encoding: binary" as well as "application/force-download" for the ContentType. Nothing has helped so far. There are a few Microsoft KB articles, all of which indicate that Cache-Control: no-cache is the culprit. Any other ideas?
Update: By the way, for completeness, this same issue occurs with Excel and Word outputs as well.
Update: No progress has been made. I emailed the .SAZ file from Fiddler to EricLaw and he was able to reproduce the problem when debugging IE, but there are no solutions yet. Bounty is going to expire...
Your Cache-Control header is incorrect. It should be Cache-Control: max-age=1 with the dash in the middle. Try fixing that first to see if it makes a difference.
Typically, I would say that the most likely culprit is your Vary header, as such headers often cause problems with caching in IE: http://blogs.msdn.com/ieinternals/archive/2009/06/17/9769915.aspx. You might want to try adding an ETag to the response headers.
Fiddler should have no impact on cacheability (unless you've written rules), and it sounds like you're saying that it does, which suggests that perhaps there's a timing problem of some sort.
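For what it's worth, a minimal sketch of emitting an ETag from ASP.NET (C# shown; the tag value is an invented placeholder and should really be derived from the content):
// Sketch: attach an ETag to the response; derive the value from the content in practice.
Response.Cache.SetCacheability(HttpCacheability.Private);
Response.Cache.SetETag("\"statement-v1\""); // placeholder value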
>Do not save encrypted pages to disk security option used to be disabled by default
This option is still disabled by default (in IE6, 7, and 8), although IT Administrators can turn it on via Group Policy, and some major companies do so.
Incidentally, the reason you see the certificate error while running Fiddler is that you haven't elected to trust the Fiddler root certificate; see http://www.fiddler2.com/fiddler/help/httpsdecryption.asp for more on this topic.
After two weeks on a wild goose chase, I have not been able to find any combination of code changes that will allow this method of streaming PDF, Excel or Word documents when the 'Do not save encrypted pages to disk' option is turned on.
Microsoft has said this behavior is by design in a number of KB articles and private emails. It appears that when the 'Do not save encrypted pages to disk' option is turned on, IE is behaving correctly and doing what it is told to do. This post is the best resource I have found so far that explains why this setting would be enabled and the pros and cons of enabling it:
"The 'Do not save encrypted pages to disk' comes into play when dealing with SSL (HTTPS) connections. Just like a web server can send done information about how to cache a file one can basically set Internet Explorer up to not save files to the cache during an SSL (HTTPS) connection regardless if the web server advises you can.
What is the upside for turning this feature on, security is the number one reason why the feature is turned on. Pages are not stored in the Temporary Internet Files cache.
What is the downside? Slow performance, since nothing is saved to the cache even that 1 byte gif image used a dozen times on the page must be fetched from the webserver each time. To make matters worse some user actions may fail such as downloaded files will be deleted and an error presented or opening PDF documents will fail to name a few scenarios."
The best solution we can find at this point is to communicate to our clients and users that alternatives exist to using this setting:
"Use 'Empty Temporary Internet Files folder when browser is closed' instead. Each time the browser closes, all files will be purged from the cache, assuming there is not a lock on a file from another instance of the browser or some external application.
A lot of consideration needs to be given before utilizing 'Do not save encrypted pages to disk'. It sounds like a great security feature, and it is, but using it may cause your help desk calls to go up due to download failures or slow performance."
I found that this seemed to work for me:
Dim browser As System.Web.HttpBrowserCapabilities = Request.Browser
If (browser.Browser = "IE") Then
Response.AppendHeader("cache-control", "private") ' ie only
Else
Response.AppendHeader("cache-control", "no-cache") ' all others (FF/Chrome tested)
End If
I had a similar problem with PDF files I wanted to stream. Even with Response.ClearHeaders() I saw Pragma and Cache-Control headers added at runtime. The solution was to clear the headers in IIS (right-click -> Properties on the page loading the PDF, then the "HTTP Headers" tab).
SOLVED: This is an IE problem, not one from the application...
Fix it with this: http://support.microsoft.com/kb/323308
It worked perfectly for me, after trying for a long time.
ATT: Mr.Dark
We faced a similar problem a long time back (this is Java EE). In the web application config we added:
<mime-mapping>
    <extension>PDF</extension>
    <mime-type>application/octet-stream</mime-type>
</mime-mapping>
This will make any PDF coming from your web application be downloaded instead of the browser trying to render it.
EDIT: it looks like you are streaming it. In that case, use a MIME type of application/octet-stream in your code and not in the config. So here, instead of
Response.ContentType = "application/pdf"
you will use
Response.ContentType = "application/octet-stream"
What version of IE? I recall that Microsoft released a Hotfix for IE6 for this issue. Hope that is of some use?
I read of your Cache-Control goose chase, but I'll share mine, which met my needs, in case it helps: try disabling gzip compression.
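If you want to try that on IIS 7 or later, a sketch for web.config follows (an assumption on my part; the original poster was on IIS 6, where compression is configured through IIS Manager or the metabase instead):
<!-- Sketch (IIS 7+ integrated pipeline assumed): turn off static and dynamic compression -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="false" doDynamicCompression="false" />
  </system.webServer>
</configuration>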
Adding it here in the hope that someone might find it useful instead of going through the links.
Here is my code
byte[] bytes = // get byte array from DB
Response.Clear();
Response.ClearContent();
Response.ClearHeaders();
Response.Buffer = true;
// Prevent this page from being cached.
// NOTE: we cannot use the CacheControl property, or set the PRAGMA header value due to a flaw re: PDF/SSL/IE
Response.Expires = -1;
Response.ContentType = "application/pdf";
// Specify the number of bytes to be sent
Response.AppendHeader("content-length", bytes.Length.ToString());
Response.BinaryWrite(bytes);
// Wrap Up
Response.Flush();
Response.Close();
Response.End();
Like the OP I was scratching my head for days trying to get this to work, but I did it in the end so I thought I'd share my 'combination' of headers:
if (System.Web.HttpContext.Current.Request.Browser.Browser == "InternetExplorer"
    && System.Web.HttpContext.Current.Request.Browser.Version == "8.0")
{
    System.Web.HttpContext.Current.Response.Clear();
    System.Web.HttpContext.Current.Response.ClearContent();
    System.Web.HttpContext.Current.Response.ClearHeaders();
    System.Web.HttpContext.Current.Response.ContentType = "application/octet-stream";
    System.Web.HttpContext.Current.Response.AppendHeader("Pragma", "public");
    System.Web.HttpContext.Current.Response.AppendHeader("Cache-Control", "private, max-age=60");
    System.Web.HttpContext.Current.Response.AppendHeader("Content-Transfer-Encoding", "binary");
    System.Web.HttpContext.Current.Response.AddHeader("content-disposition", "attachment; filename=" + document.Filename);
    System.Web.HttpContext.Current.Response.AddHeader("content-length", document.Data.LongLength.ToString());
    System.Web.HttpContext.Current.Response.BinaryWrite(document.Data);
}
Hope that saves someone somewhere some pain!
I was running into a similar issue with attempting to stream a PDF over SSL and put this inside an iframe or object. I was finding that my aspx page would keep redirecting to the non-secure version of the URL, and the browser would block it.
I found that switching from an ASPX page to an ASHX handler fixed my redirection problem.
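For anyone making the same switch, a minimal ASHX handler streaming a PDF might look something like this (a sketch; LoadPdfBytes is a hypothetical placeholder for your own report generation or data access):
using System.Web;

public class PdfHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        byte[] pdf = LoadPdfBytes(); // hypothetical helper, not from the original post
        context.Response.ContentType = "application/pdf";
        context.Response.AddHeader("content-length", pdf.Length.ToString());
        context.Response.BinaryWrite(pdf);
    }

    public bool IsReusable
    {
        get { return false; }
    }

    private static byte[] LoadPdfBytes()
    {
        // Placeholder: fetch or generate the PDF bytes here.
        throw new System.NotImplementedException();
    }
}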