ASP.NET Response.Cache.SetNoStore() vs. Response.Cache.SetNoServerCaching()

Can anyone break down what these two methods do at an HTTP level?
We are dealing with Akamai edge-caching and have been told that SetNoStore() will cause a cache exclusion so that (for example) form pages will always post back to the origin server. According to {guy}, this sets the HTTP header:
Cache-Control: "no-cache, no-store"
As I was implementing this change to our forms I found SetNoServerCaching(). Well that seems to make a bit more sense semantically, and the documentation says "Explicitly denies caching of the document on the origin-server."
So I went down to the sea sea sea to see what I could see see see. I tried both of these methods and reviewed the headers in Firebug and Fiddler.
And from what I can tell, both of these methods set the exact same HTTP header.
Can anyone explain if there are actual differences between these methods and, if so, where they are hiding in the HTTP response?!

There are a few differences.
SetNoStore essentially stops the browser (and any network resource such as a CDN) from saving any part of the response or request, including saving to temp files. This sets the no-store HTTP 1.1 directive.
SetNoServerCaching essentially stops the server from saving files. In ASP.NET there are several levels of caching that can happen: data only, partial requests, full pages, and SQL data. This call should stop the HTTP (full and partial) requests being saved on the server. This method should not set the Cache-Control header, no-store, or no-cache.
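For illustration, a minimal sketch of the two calls in a Web Forms code-behind (a System.Web.UI.Page and its Page_Load handler are assumed here; both methods live on HttpCachePolicy):
// Assumes a System.Web.UI.Page code-behind.
protected void Page_Load(object sender, EventArgs e)
{
    // Tells the browser and any intermediary (proxy, CDN) not to store
    // any part of the response; emitted via the Cache-Control header.
    Response.Cache.SetNoStore();

    // Turns off ASP.NET/IIS output caching for this response only;
    // on its own it should not add no-cache or no-store to the headers.
    Response.Cache.SetNoServerCaching();
}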
There is also
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetMaxAge(new TimeSpan(1, 0, 0));
as a possible way of enabling caching; this sets Cache-Control: public, max-age=3600, which tells caches when the content expires.
For a CDN you probably want to set those expiration headers so that the CDN knows when to fetch new content if it gets a HIT. You probably don't want no-cache or no-store, as these would cause a refetch on every HIT, essentially nullifying any benefit the CDN brings you (other than a possibly faster backbone connection to the end user than your current ISP, which would be marginal).

The difference between the two is:
HttpCachePolicy.SetNoStore() or Response.Cache.SetNoStore:
Prevents the browser from caching the ASPX page.
HttpCachePolicy.SetNoServerCaching or Response.Cache.SetNoServerCaching:
Stops all origin-server caching for the current response. Explicitly denies caching of the document on the origin-server. Once set, all requests for the document are fully processed.
When these methods are invoked, caching cannot be reenabled for the current response.

Related

Checking if HTTP resource has changed after maximum cache time has expired

I'm trying to work out a new caching policy for the static resources on a website. A common problem is whenever javascript, CSS etc. is updated, many users hold onto stale versions because currently there are no caching specific HTTP headers included in the file responses.
This becomes a serious problem when, for example, the javascript updates are linked to server-side updates, and the stale javascript chokes on the new server responses.
Eliminating browser caching completely with a cache-control: max-age=0, no-cache seems like overkill, since I'd still like to take some pressure off the server by letting browsers cache temporarily. So, setting the cache policy to a maximum of one hour seems alright, like cache-control: max-age=3600, no-cache.
My understanding is that this will always fetch a new copy of the resource if the cached copy is older than one hour. I'd specifically like to know if it's possible to set a HTTP header or combination of headers that will instruct browsers to only fetch a new copy if the resource was last checked more than one hour ago AND if the resource has changed.
I'm just trying to avoid browsers blindly fetching new copies just because the cached resource is older than one hour, so I'd also like to add the condition that the resource has been changed.
Just to illustrate further what I'm asking:
New user arrives at site and gets fresh copy of script.js
User stays on site for 45 mins, browser uses cached copy of script.js all the time
User comes back to site 2 hours later, and browser asks the server if script.js has changed
If it has, then it gets a fresh copy and the process repeats
If it has not changed, then it uses the cached copy for the next hour, after which it will check again
Have I misunderstood things? Is what I'm asking how it actually works, or do I have to do something different?
You have some serious misconceptions about what the various cache control directives do and why caches behave as they do.
Eliminating browser caching completely with a cache-control: max-age=0, no-cache seems like overkill, since I'd still like to take some pressure off the server by letting browsers cache temporarily ...
The no-cache option is wrong too. Including it means the browser will always check with the server for modifications to the file every time.
That isn't what no-cache means or what it is intended for - it means that a client MUST NOT use a cached copy to satisfy a subsequent request without successful revalidation - it does not, and has never, meant "do not cache" - that is what the no-store directive is for.
Also, the max-age directive is just the primary means for caches to calculate the freshness lifetime and expiration time of cached entries. The Expires header (minus the value of the Date header) can also be used - as can a heuristic based on the current UTC time and any Last-Modified header value.
Really, if your goal is to retain the cached copy of a resource for as long as it is meaningful - whilst minimising requests and responses - you have a number of options.
The Etag (Entity Tag) header - this is supplied by the server in response to a request in either a "strong" or "weak" form. It is usually a hash based on the resource in question. When a client re-requests a resource it can pass the stored value of the Etag with the If-None-Match request header. If the resource has not changed then the server will respond with 304 Not Modified.
You can think of ETags as fingerprints for resources. They can be used to massively reduce the amount of information sent over the wire - as only fresh data is served - but they do not have any bearing on the number or frequency of requests.
The Last-Modified header - this is supplied by the server in response to a request in HTTP-date format - it tells the client the last time the resource was modified.
When a client re-requests a resource it can pass the stored value of the last-modified header with the If-Modified-Since request header. If the resource has not changed since the time it was last modified then the server will respond with 304 Not Modified.
You can think of Last-Modified as a weaker form of entity checking than ETags. It addresses the same problem (bandwidth/redundancy) in a less robust way, and again it has no bearing at all on the actual number of requests made.
Revving - a technique that uses a combination of the Expires header and the name (URN) of a resource (see stevesouders blog post).
Here one basically sets a far forward Expires header - say 5 years from now - to ensure the static resource is cached for a long time.
You then have two options for updating - either appending a versioning query string to the request's URL - e.g. "/mystyles.css?v=1.1" - and updating the version number as and when the resource changes. Or better - versioning the file name itself, e.g. "/mystyles.v1.1.css", so that each version is cached for as long as possible.
This way not only do you reduce the amount of bandwidth - you also eliminate all checks to see if the resource has changed until you rename it.
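Purely as an illustration of revving, here is a hypothetical ASP.NET helper that builds such versioned URLs from a content hash (the Rev name and the query-string scheme are invented for this sketch):
// Hypothetical helper: appends a hash of the file's content to the URL,
// so the URL changes exactly when the resource does.
public static string Rev(string virtualPath)
{
    byte[] bytes = System.IO.File.ReadAllBytes(
        System.Web.HttpContext.Current.Server.MapPath(virtualPath));
    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        string hash = System.BitConverter.ToString(md5.ComputeHash(bytes))
            .Replace("-", "").Substring(0, 8);
        return virtualPath + "?v=" + hash; // e.g. /mystyles.css?v=1A2B3C4D
    }
}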
I suppose the main point here is that none of the cache control directives you mention - max-age, public, etc. - have any bearing at all on whether a 304 response is generated. For that, use either ETag / If-None-Match or Last-Modified / If-Modified-Since, or a combination of them (with If-Modified-Since as a fallback mechanism to If-None-Match).
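To make the validation flow concrete, here is a minimal, illustrative ASP.NET IHttpHandler that serves a script with a one-hour freshness lifetime and honours If-None-Match (the file name is a stand-in, and GetHashCode is only a placeholder for a real, stable content hash):
// Illustration only: serves ~/script.js with Cache-Control: public, max-age=3600
// plus an ETag, and answers 304 Not Modified while the ETag still matches.
public class ScriptHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        string content = System.IO.File.ReadAllText(
            context.Server.MapPath("~/script.js"));
        // Placeholder fingerprint; a real handler would use a stable hash.
        string etag = "\"" + content.GetHashCode().ToString("x") + "\"";

        if (context.Request.Headers["If-None-Match"] == etag)
        {
            context.Response.StatusCode = 304; // Not Modified, no body sent
            return;
        }

        context.Response.Cache.SetCacheability(System.Web.HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(System.TimeSpan.FromHours(1)); // fresh for 1 hour
        context.Response.Cache.SetETag(etag);
        context.Response.ContentType = "application/javascript";
        context.Response.Write(content);
    }

    public bool IsReusable { get { return true; } }
}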
It seems that I have misunderstood how it works, because some testing in Chrome has revealed exactly the behavior that I was looking for in the 5 steps I mentioned.
It doesn't blindly grab a fresh copy from the server when the max-age has expired. It does a GET, and if the response is 304 (Not Modified), it continues using the cached copy until the next hour has expired, at which point it checks for changes again etc.
The no-cache option is wrong too. Including it means the browser will always check with the server for modifications to the file every time. So what I was really looking for is:
Cache-Control: public, max-age=3600

Is caching in browser automatic?

I have a JavaScript app that sends requests to REST API, the responses from server have cache headers (like ETag, cache-control, expires). Is caching of responses in browser automatic, or the app must implement some sort of mechanism to save the data?
An AJAX request is no different from a normal request - it's a GET/POST/HEAD/whatever request being sent by the browser, and it is handled as such. This is confirmed here:
The HTTP and Cache sub-systems of modern browsers are at a much lower level than Ajax’s XMLHttpRequest object. At this level, the browser doesn’t know or care about Ajax requests. It simply obeys the normal HTTP caching rules based on the response headers returned from the server.
As per the jQuery documentation, caches can also be invalidated in at least one usual way (appending a query string):
cache (default: true, false for dataType 'script' and 'jsonp')
Type: Boolean
If set to false, it will force requested pages not to be cached by the browser. Note: Setting cache to false will only work correctly with HEAD and GET requests. It works by appending "_={timestamp}" to the GET parameters. The parameter is not needed for other types of requests, except in IE8 when a POST is made to a URL that has already been requested by a GET.
So in short, given the same headers, AJAX responses are cached the same way as other requests.
The browser handles caching of resources automatically. What you seem to be asking about is the actual response from the server.
You will need to set that up yourself in your application. You can do so both on the front end and the back end.
Most of JS frameworks have cache control implemented, for example:
jQuery
$.ajaxSetup({
// Disable caching of AJAX responses
cache: false
});
AngularJS
$http.defaults.cache = false;
etc.
On the back end it really depends on what language you are using, what server engine, etc.
Check out Memcached, for example:
http://memcached.org/
As with anything in web development there are odd things here and there; for example, some IE versions automatically cache requests and you have to add a unique id to the URL to prevent that.
From https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started :
Note 2: If you do not set the header Cache-Control: no-cache, the browser will cache the response and never re-submit the request, making debugging "challenging." You can also append an always-different additional GET parameter, like the timestamp or a random number (see bypassing the cache).
The browser should handle the cache automatically.
Check this article; for JavaScript there is only a method to clear the cache:
https://developer.chrome.com/extensions/browsingData
If the server sends the response with any cache headers, browsers should respect them. There is no difference between ordinary resources and AJAX requests.
You can also specify cache headers in your AJAX calls to make them bypass caches and fetch the whole response from the server.
Most modern browsers support browser caching by way of the cache and expires headers:
http://www.arlocarreon.com/blog/http/http-requests-and-your-browsers-cache/
There are 2 aspects of an HTTP request that can qualify it for being cached:
- Specific HTTP cache and expire headers
- A unique URL
Interesting read:
http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/

Supporting both ASP.NET Caching and ETag/Conditional GET in WCF WebHttp Service

I am trying to implement a REST web service with WCF that supports both caching and Conditional GETs.
I implemented basic caching following the instructions in MSDN: Caching Support for WCF Web HTTP Services. That means adding an [AspNetCacheProfile("MyOutputCacheProfile")] attribute to each of my web methods and adding appropriate entries to web.config. That seems to work correctly: cached responses are returned when identical arguments are passed to the web methods.
Then I added support for Conditional GET by calculating an ETag value and setting that on the response like this:
WebOperationContext.Current.OutgoingResponse.SetETag(myETag);
That sorta works: I can see the ETag header in the response the first time I call the web method.
But here's the problem: The next time I invoke that web method with the same arguments, a cached response is returned, and the cached response does not include the ETag header. (If I wait until cache expiration, or disable caching entirely, then the ETag headers are returned properly.)
So, is there any way get the cached responses to include that ETag value?
Update: After some more study and experimentation, I find that doing this causes the ETag header to be included in all cached responses:
HttpContext.Current.Response.Cache.SetETag(myETag);
If I call that, then I don't need to call the associated WebOperationContext...SetETag() operation to make everything work.
Is this the Right Way to do this?
Correct me if I am wrong. RESTful services are closer to HTTP, and the HTTP caching spec says that:
The goal of caching in HTTP/1.1 is to eliminate the need to send
requests in many cases, and to eliminate the need to send full
responses in many other cases. The former reduces the number of
network round-trips required for many operations; we use an
"expiration" mechanism for this purpose (see section 13.2). The latter
reduces network bandwidth requirements; we use a "validation"
mechanism for this purpose (see section 13.3).
ASP.NET caching does not fall into either of these categories (neither expiration nor validation). The caching is done only on the web server, and IIS, instead of executing the method, sends the stored response. Somehow it does not fit the RESTful model.
To implement caching, we should add Cache Control Headers and Etag to response headers and then try to handle conditional Get. Please consult this excellent article.
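For what it's worth, the .NET 4 WCF WebHttp stack has a built-in helper for the validation half; a rough sketch (the operation, Item type, LoadItem call, and Version field are invented for illustration):
// Sketch of a WCF WebHttp operation doing a conditional GET.
// CheckConditionalRetrieve replies 304 Not Modified automatically when the
// request's If-None-Match matches the supplied ETag.
[WebGet(UriTemplate = "items/{id}")]
public Item GetItem(string id)
{
    Item item = LoadItem(id);              // invented data-access call
    string etag = item.Version.ToString(); // invented version field

    WebOperationContext ctx = WebOperationContext.Current;
    ctx.IncomingRequest.CheckConditionalRetrieve(etag);
    ctx.OutgoingResponse.SetETag(etag);
    return item;
}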

ASP.NET MVC and IE caching - manipulating response headers ineffective

Background
I'm attempting to help a colleague debug an issue that hasn't been an issue for the past 6 months. After the most recent deployment of an ASP.NET MVC 2 application, FileResult responses that force a PDF file at the user for opening or saving are having trouble existing long enough on the client machine for the PDF reader to open them.
Earlier versions of IE (especially 6) are the only browsers affected. Firefox, Chrome, and newer versions of IE (>8) all behave as expected. With that in mind, the next section defines the actions necessary to recreate the issue.
Behavior
User clicks a link that points to an action method (a plain hyperlink with an href attribute).
The action method generates a PDF represented as a byte stream. The method always recreates the PDF.
In the action method, headers are set to instruct browsers how to cache the response. They are:
response.AddHeader("Cache-Control", "public, must-revalidate, post-check=0, pre-check=0");
response.AddHeader("Pragma", "no-cache");
response.AddHeader("Expires", "0");
For those unfamiliar with exactly what the headers do:
a. Cache-Control: public
Indicates that the response MAY be cached by any cache, even if it would normally be non-cacheable or cacheable only within a non-shared cache.
b. Cache-Control: must-revalidate
When the must-revalidate directive is present in a response received by a cache, that cache MUST NOT use the entry after it becomes stale to respond to a subsequent request without first revalidating it with the origin server.
c. Cache-Control: pre-check (introduced with IE5)
Defines an interval in seconds after which an entity must be checked for freshness. The check may happen after the user is shown the resource but ensures that on the next roundtrip the cached copy will be up-to-date.
d. Cache-Control: post-check (introduced with IE5)
Defines an interval in seconds after which an entity must be checked for freshness prior to showing the user the resource.
e. Pragma: no-cache (to ensure backwards compatibility with HTTP/1.0)
When the no-cache directive is present in a request message, an application SHOULD forward the request toward the origin server even if it has a cached copy of what is being requested
f. Expires
The Expires entity-header field gives the date/time after which the response is considered stale.
We return the file from the action
return File(file, "mime/type", fileName);
The user is presented with an Open/Save dialog box
Clicking "Save" works as expected, but clicking "Open" launches the PDF reader, but the temporary file IE stored has already been deleted by the time the reader tries to open the file, so it complains that the file is missing (and it is).
There are a half dozen other apps here that use the same headers to force Excel, CSV, PDF, Word, and a ton of other content at users and there's never been an issue.
The Question
Are the headers correct for what we're trying to do? We want the file to exist temporarily (get cached) but always be replaced by new versions, even though the requests may be identical.
The response headers are set in the action method before returning a FileResult. I've asked my colleague to try creating a new class that inherits from FileResult and instead override the ExecuteResult method so that it modifies the headers and then calls base.ExecuteResult() - no status on that.
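For reference, that override idea might look roughly like this - an untested sketch using the headers from step 3 (assumes System.Web.Mvc; the class name is invented):
// Rough sketch: a FileContentResult that stamps the headers immediately
// before the body is written.
public class NonCachedFileResult : FileContentResult
{
    public NonCachedFileResult(byte[] contents, string contentType)
        : base(contents, contentType) { }

    public override void ExecuteResult(ControllerContext context)
    {
        HttpResponseBase response = context.HttpContext.Response;
        response.AddHeader("Cache-Control",
            "public, must-revalidate, post-check=0, pre-check=0");
        response.AddHeader("Pragma", "no-cache");
        response.AddHeader("Expires", "0"); // see the 0 vs. -1 discussion below
        base.ExecuteResult(context);
    }
}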
I have a hunch the "Expires" header of "0" is the culprit. According to this W3C article, setting it to "0" implies "already expired." I do want it to be expired, I just don't want IE to go removing it off of the filesystem before the application handling it gets a chance to open it.
As always, thanks!
Edit: The Solution
Upon further testing (using Fiddler to inspect the headers), we were seeing that the response headers we thought were getting set were not the ones being interpreted by the browser. Having not been familiar with the code myself, I was unaware of an underlying issue: the headers were getting stomped on outside of the action method.
Nonetheless, I'm going to leave this question open. Still outstanding is this: there seems to be some discrepancy between the Expires header having a value of 0 vs. -1. If anybody can lay claim to differences by design, in regards to IE, I would still like to hear about it. As for a solution though, the above headers do work as intended with the Expires value set to -1 in all browsers.
Update 1
The post How to control web page caching, across all browsers? describes in detail that caching can be prevented in all browsers with the help of setting Expires = 0. I'm still not sold on this 0 vs -1 argument...
I think you should just use
HttpContext.Current.Response.Cache.SetMaxAge(new TimeSpan(0));
or
HttpContext.Current.Response.Headers.Set("Cache-Control", "private, max-age=0");
to set max-age=0, which means nothing more than that the cache must revalidate (see here). If you additionally set an ETag in the header with some custom checksum or hash of the data, the ETag from the previous request will be sent to the server. The server is then able either to return the data or, in case the data are exactly the same as before, to return an empty body and HttpStatusCode.NotModified as the status code. In that case the web browser will take the data from the local browser cache.
I recommend you use Cache-Control: private, which forces two important things: 1) it switches off caching of the data on the proxy, which sometimes has very aggressive caching settings; 2) it allows caching of the data but does not permit sharing of the cache with other users. This can solve privacy problems, because the data you return to one user may not be allowed to be read by other users. By the way, the code HttpContext.Current.Response.Cache.SetMaxAge(new TimeSpan(0)) sets Cache-Control: private, max-age=0 in the HTTP header by default. If you do want to use Cache-Control: public you can use SetCacheability(HttpCacheability.Public); to overwrite the behavior, or use Headers.Set instead of Cache.SetMaxAge.
If you are interested in studying more caching options of the HTTP protocol, I recommend reading the caching tutorial.
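In code, the two variants described here would be roughly (assuming, as stated above, that private is the default cacheability):
// Emits Cache-Control: private, max-age=0 (private being the default)
HttpContext.Current.Response.Cache.SetMaxAge(TimeSpan.Zero);

// To get Cache-Control: public, max-age=0 instead:
HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.Public);
HttpContext.Current.Response.Cache.SetMaxAge(TimeSpan.Zero);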
UPDATED: I decided to write some more information to clarify my position. According to information from Wikipedia, even old web browsers like Mosaic 2.7, Netscape 2.0, and Internet Explorer 3.0 support the March 1996 pre-standard of HTTP/1.1 described in RFC 2068. So I suppose (but have not tested it) that even those old web browsers support the max-age=0 HTTP header. In any case, Netscape 2.06 and Internet Explorer 4.0 definitively support HTTP 1.1.
So you should first ask yourself: which HTML standard do you use? Do you still use HTML 2.0 instead of the later HTML 3.2 published in January 1997? I suppose you use at least HTML 4.0, published in December 1997. So if you build your application at least in HTML 4.0, your site can be oriented toward web clients which support HTTP 1.1 and ignore (not support) web clients which don't support HTTP 1.1.
Now about the other "Cache-Control" headers such as "private, max-age=0". Including those headers is, in my opinion, pure paranoia. As I had some caching problems myself, I also tried including various other headers, but later, after reading section 14.9 of RFC 2616 carefully, I use only "Cache-Control: private, max-age=0".
The only "Cache-Control" directive which could additionally be discussed is "must-revalidate", described in section 14.9.4, which I referenced before. Here is the quote:
The must-revalidate directive is necessary to support reliable
operation for certain protocol features. In all circumstances an
HTTP/1.1 cache MUST obey the must-revalidate directive; in particular,
if the cache cannot reach the origin server for any reason, it MUST
generate a 504 (Gateway Timeout) response.
Servers SHOULD send the must-revalidate directive if and only if
failure to revalidate a request on the entity could result in
incorrect operation, such as a silently unexecuted financial
transaction. Recipients MUST NOT take any automated action that
violates this directive, and MUST NOT automatically provide an
unvalidated copy of the entity if revalidation fails.
Although this is
not recommended, user agents operating under severe connectivity
constraints MAY violate this directive but, if so, MUST explicitly
warn the user that an unvalidated response has been provided. The
warning MUST be provided on each unvalidated access, and SHOULD
require explicit user confirmation.
Sometimes, if I have problems with my Internet connection, I see an empty page with a "Gateway Timeout" message. It comes from the usage of the "must-revalidate" directive. I don't think the "Gateway Timeout" message really helps the user.
So those who prefer to start the self-destruct procedure on hearing a busy signal when calling their boss should additionally use the "must-revalidate" directive in the "Cache-Control" header. To everyone else I recommend just using "Cache-Control: private, max-age=0" and nothing more.
For IE, I remember having to set Expires: -1. How to prevent caching in Internet Explorer seems to confirm this with the following code snippet.
<% Response.CacheControl = "no-cache" %>
<% Response.AddHeader "Pragma", "no-cache" %>
<% Response.Expires = -1 %>
Looking back in code, this is what I found. Also, I vaguely remember that if you set Cache-Control: private it may not behave correctly with SSL.
Response.AddHeader("Cache-Control", "no-cache");
Response.AddHeader("Expires", "-1");
Also, So, You Don't Want To Cache, Huh? mentions -1, but uses methods on Response.Cache instead:
// Stop Caching in IE
Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
// Stop Caching in Firefox
Response.Cache.SetNoStore();
However, ASP Page caching issue (IE8) says this code doesn't work.
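If one still wants to try combining the IE and Firefox advice above with a backdated Expires header, a hedged sketch (SetExpires with a past date plays the role of Expires: -1):
// Belt-and-braces sketch combining the snippets above; untested.
Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache); // IE
Response.Cache.SetNoStore();                                         // Firefox
Response.Cache.SetExpires(System.DateTime.UtcNow.AddYears(-1));      // backdated Expires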

Why both no-cache and no-store should be used in HTTP response?

I'm told that to prevent user-info leaking, "no-cache" alone in the response is not enough; "no-store" is also necessary.
Cache-Control: no-cache, no-store
After reading this spec http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, I'm still not quite sure why.
My current understanding is that it is just for intermediate cache servers. Even if "no-cache" is in the response, an intermediate cache server can still save the content to non-volatile storage. The intermediate cache server will then decide whether to use the saved content for a following request. However, if "no-store" is in the response, the intermediate cache server is not supposed to store the content. So, it is safer.
Is there any other reason we need both "no-cache" and "no-store"?
I must clarify that no-cache does not mean do not cache. In fact, it means "revalidate with server" before using any cached response you may have, on every request.
must-revalidate, on the other hand, only needs to revalidate when the resource is considered stale.
If the server says that the resource is still valid then the cache can respond with its representation, thus alleviating the need for the server to resend the entire resource.
no-store is effectively the full do not cache directive and is intended to prevent storage of the representation in any form of cache whatsoever.
I say whatsoever, but note this in the RFC 2616 HTTP spec:
History buffers MAY store such responses as part of their normal operation
But this is omitted from the newer RFC 7234 HTTP spec in potentially an attempt to make no-store stronger, see:
https://www.rfc-editor.org/rfc/rfc7234#section-5.2.1.5
Under certain circumstances, IE6 will still cache files even when Cache-Control: no-cache is in the response headers.
The W3C states of no-cache:
If the no-cache directive does not
specify a field-name, then a cache
MUST NOT use the response to satisfy a
subsequent request without successful
revalidation with the origin server.
In my application, if you visited a page with the no-cache header, then logged out and then hit back in your browser, IE6 would still grab the page from the cache (without a new/validating request to the server). Adding in the no-store header stopped it doing so. But if you take the W3C at their word, there's actually no way to control this behavior:
History buffers MAY store such responses as part of their normal operation.
General differences between browser history and the normal HTTP caching are described in a specific sub-section of the spec.
From the HTTP 1.1 specification:
no-store:
The purpose of the no-store directive is to prevent the inadvertent release or retention of sensitive information (for example, on backup tapes). The no-store directive applies to the entire message, and MAY be sent either in a response or in a request. If sent in a request, a cache MUST NOT store any part of either this request or any response to it. If sent in a response, a cache MUST NOT store any part of either this response or the request that elicited it. This directive applies to both non-shared and shared caches. "MUST NOT store" in this context means that the cache MUST NOT intentionally store the information in non-volatile storage, and MUST make a best-effort attempt to remove the information from volatile storage as promptly as possible after forwarding it.
Even when this directive is associated with a response, users might explicitly store such a response outside of the caching system (e.g., with a "Save As" dialog). History buffers MAY store such responses as part of their normal operation.
The purpose of this directive is to meet the stated requirements of certain users and service authors who are concerned about accidental releases of information via unanticipated accesses to cache data structures. While the use of this directive might improve privacy in some cases, we caution that it is NOT in any way a reliable or sufficient mechanism for ensuring privacy. In particular, malicious or compromised caches might not recognize or obey this directive, and communications networks might be vulnerable to eavesdropping.
no-store should not be necessary in normal situations, and can harm both speed and usability. It is intended for use where the HTTP response contains information so sensitive it should never be written to a disk cache at all, regardless of the negative effects that creates for the user.
How it works:
Normally, even if a user agent such as a browser determines that a response shouldn't be cached, it may still store it to the disk cache for reasons internal to the user agent. This version may be utilised for features like "view source", "back", "page info", and so on, where the user hasn't necessarily requested the page again, but the browser doesn't consider it a new page view and it would make sense to serve the same version the user is currently viewing.
Using no-store will prevent that response being stored, but this may impact the browser's ability to give "view source", "back", "page info" and so on without making a new, separate request for the server, which is undesirable. In other words, the user may try viewing the source and if the browser didn't keep it in memory, they'll either be told this isn't possible, or it will cause a new request to the server. Therefore, no-store should only be used when the impeded user experience of these features not working properly or quickly is outweighed by the importance of ensuring content is not stored in the cache.
My current understanding is that it is just for intermediate cache server. Even if "no-cache" is in response, intermediate cache server can still save the content to non-volatile storage.
This is incorrect. Intermediate cache servers compatible with HTTP 1.1 will obey the no-cache and must-revalidate instructions, ensuring that content is not cached. Using these instructions will ensure that the response is not cached by any intermediate cache, and that all subsequent requests are sent back to the origin server.
If the intermediate cache server does not support HTTP 1.1, then you will need to use Pragma: no-cache and hope for the best. Note that if it doesn't support HTTP 1.1 then no-store is irrelevant anyway.
If you want to prevent all caching (e.g. force a reload when using the back button) you need:
no-cache for IE
no-store for Firefox
There's my information about this here:
http://blog.httpwatch.com/2008/10/15/two-important-differences-between-firefox-and-ie-caching/
For Chrome, no-cache is used to reload the page on a re-visit, but it still caches it if you go back in history (back button). To reload the page for history-back as well, use no-store. IE needs must-revalidate to work in all cases.
So just to be sure to avoid all bugs and misinterpretations I always use
Cache-Control: no-store, no-cache, must-revalidate
if I want to make sure it reloads.
If a caching system correctly implements no-store, then you wouldn't need no-cache. But not all do. Additionally, some browsers implement no-cache like it was no-store. Thus, while not strictly required, it's probably safest to include both.
Note that Internet Explorer from version 5 up to 8 will throw an error when trying to download a file served via https and the server sending Cache-Control: no-cache or Pragma: no-cache headers.
See http://support.microsoft.com/kb/812935/en-us
The use of Cache-Control: no-store and Pragma: private seems to be the closest thing which still works.
Originally we used no-cache many years ago and did run into some problems with stale content with certain browsers... Don't remember the specifics unfortunately.
We have since settled on JUST the use of no-store. We have never looked back or had a single issue with stale content in any browser or intermediary since.
This space is certainly dominated by reality of implementations vs what happens to have been written in various RFCs. Many proxies in particular tend to think they do a better job of "improving performance" by replacing the policy they are supposed to be following with their own.
Just to make things even worse, in some situations, no-cache can't be used, but no-store can:
http://faindu.wordpress.com/2008/04/18/ie7-ssl-xml-flex-error-2032-stream-error/
To answer the question, there are two players here, the client (request) and the server (response).
Client:
The client can only request with ONE cache method. There are different methods and if not specified, will use default.
default: Inspect browser cache:
If cached and "fresh": Return from cache.
If cached, stale, but still "valid": Return from cache, and schedule a fetch to update cache (for next use).
If cached and stale: Fetch with conditions, cache, and return.
If not cached: Fetch, cache, and return.
no-store: Fetch and return.
reload: Fetch, cache, and return. (default-4)
no-cache: Inspect browser cache:
If cached: Fetch with conditions, cache, and return. (default-3)
If not cached: Fetch, cache, and return. (default-4)
force-cache: Inspect browser cache:
If cached: Return it regardless if stale.
If not cached: Fetch, cache, and return. (default-4)
only-if-cached: Inspect browser cache:
If cached: Return it regardless if stale.
If not cached: Throw network error.
Notes:
Still "valid" means the current age is within the stale-while-revalidate lifetime. It needs "revalidation", but is still acceptable to return.
"Fetch" here, for simplicity, is short for "non-conditional network
fetch".
"Fetch with conditions" means fetch using headers like
If-Modified-Since, or ETag so the server can respond with 304: (Not Modified).
https://fetch.spec.whatwg.org/#concept-request-cache-mode
Server:
Now that we understand what the client can do, the server responses make more sense.
Looking at the Cache-Control header, if the server returns:
no-store: Tells client to not use cache at all
no-cache: Tells client it should do conditional requests and ignore freshness
max-age: Tells client how long a cache is "fresh"
stale-while-revalidate: Tells client how long cache is "valid"
immutable: Cache forever
Now we can put it all together. That means the only possibilities are:
Non-conditional network fetch
Conditional network fetch
Return stale cache
Return stale but valid cache
Return fresh cache
Return any cache
Any combination of client, or server can dictate what method, or set of methods, to use. If the server returns no-store, it's not going to hit the cache, no matter what the client request type. If the client request was no-store, it doesn't matter what the server returns, it won't cache. If the client doesn't specify a request type, the server will dictate it with Cache-Control.
It makes no sense for a server to return both no-cache and no-store since no-store overrides everything. Yes, you've probably seen both together, and it's useless outside of broken browser implementations. Still, no-store has been part of spec since 1999: https://datatracker.ietf.org/doc/html/rfc2616#section-14.9.2
In real-life usage, if your server supports 304: Not Modified, and you want to use the client cache as a way to improve speed but still want to force a network fetch, use no-cache. If you don't support 304 and want to force a network fetch, use no-store. If you're okay with caching sometimes, use freshness and revalidation headers.
In reality, if you're mixing up no-cache and no-store on the client, very little would change. Then just a couple of headers get sent, and there will be different internal responses handled by the browser. An issue can occur if you use no-cache and then forget to use it later: no-cache tells the browser to store the response in the cache, and a later request without it might be served from that internal cache.
There are times when you may want to mix methods even on the same resource based on context. For example, you may want to use reload on a service worker and background sync, but use default for the web page itself. This is where you can manipulate the user agent (browser) cache to your liking. Just remember that the server generally has the final say as to how the cache should work.
To clarify some possible future confusion. The client can use the Cache-Control header on the request, to tell the server to not use its own cache system when responding. This is unrelated to the browser/server dynamic, and more about the server/database dynamic.
Also, no-store technically means must not store to any non-volatile storage (disk) and must release it from volatile storage (memory) ASAP. In practice, it means don't use a cache at all. The directive actually goes both ways: a client request with no-store shouldn't be written to disk or a database and is meant to be transient.
TL;DR: no-store overrides no-cache. Setting both is useless, unless we are talking out-of-spec or HTTP/1.0 browsers that don't support no-store (Maybe IE11?). Use no-cache for 304 support.
A pretty old topic but I'll share some recent ideas:
no-store: Must not attempt to store anything, and must also take action to delete any copy it might have.
no-cache: Never use a local copy without first validating with the origin server. It prevents all possibility of a cache hit, even with fresh resources.
So, answering the question, using only one of them is enough.
Also, some (not very) recent work shows that browsers are more Cache-Control compliant nowadays.
OWASP discusses this:
What's the difference between the cache-control directives: no-cache, and no-store?
The no-cache directive in a response indicates that the response must not be used to serve a subsequent request i.e. the cache must not display a response that has this directive set in the header but must let the server serve the request. The no-cache directive can include some field names; in which case the response can be shown from the cache except for the field names specified which should be served from the server. The no-store directive applies to the entire message and indicates that the cache must not store any part of the response or any request that asked for it.
Am I totally safe with these directives?
No. But generally, use both Cache-Control: no-cache, no-store and Pragma: no-cache, in addition to Expires: 0 (or a sufficiently backdated GMT date such as the UNIX epoch). Non-html content types like pdf, word documents, excel spreadsheets, etc often get cached even when the above cache control directives are set (although this varies by version and additional use of must-revalidate, pre-check=0, post-check=0, max-age=0, and s-maxage=0 in practice can sometimes result at least in file deletion upon browser closure in some cases due to browser quirks and HTTP implementations). Also, 'Autocomplete' feature allows a browser to cache whatever the user types in an input field of a form. To check this, the form tag or the individual input tags should include 'Autocomplete="Off" ' attribute. However, it should be noted that this attribute is non-standard (although it is supported by the major browsers) so it will break XHTML validation.
Source here.
