Browser cache not expiring - asp.net

I am developing my application in ASP.NET and want to force refreshing of .js files after a certain period of time. In order to test that in my local environment, I made the following change to my web.config file:
<httpProtocol>
  <customHeaders>
    <add name="Cache-Control" value="must-revalidate max-age=120" />
  </customHeaders>
</httpProtocol>
As I understand Cache-Control, based on the above settings the browser cache should expire after 2 minutes, and any new postback should send a request to the web server to check whether a newer version of the .js file is available. I used Fiddler to monitor network traffic and I don't see any request for the .js file. The browser I am using is Chrome, so I also used its dev tools to monitor network traffic; I can see that the browser is using the cached version of that file.
Are there any other settings that would force the browser to expire the cached files?

Your Cache-Control value is invalid; try separating the directives with a comma: must-revalidate, max-age=120.
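Applied to the customHeaders entry from the question, the corrected line would read:
<add name="Cache-Control" value="must-revalidate, max-age=120" />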
Relevant excerpt from RFC 7234 Hypertext Transfer Protocol (HTTP/1.1): Caching:
Cache-Control = 1#cache-directive
cache-directive = token [ "=" ( token / quoted-string ) ]
And ABNF List Extension from RFC 7230 Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing:
A construct "#" is defined, similar to "*", for defining
comma-delimited lists of elements. The full form is "<n>#<m>element"
indicating at least <n> and at most <m> elements, each separated by a
single comma (",") and optional whitespace (OWS).
Also make sure to put the new configuration in the right place. For more information, check Is it possible to add response http headers in web.config?.

Related

Recommended CORS Allow and Expose Headers

The enable-cors.org nginx config suggests using the values below for Access-Control-Allow-Headers and Access-Control-Expose-Headers, but there isn't much explanation of why these are recommended beyond the comment "Custom headers and headers various browsers *should* be OK with but aren't." I'd rather not inflate the payload of every API request if some of these are not needed for my application.
I know I could remove them and wait for something to break, but I'm hoping for some background on why/how they were selected so I can make a more educated decision about whether they are necessary for my application, i.e. were they recommended to support a browser that my application doesn't need to support?
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range
Access-Control-Expose-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range
For the Allow-Headers, I can understand for most of them why a client would want to send them. X-CustomHeader stands out as an oddball though. Also, I tested in Chrome that even if User-Agent isn't explicitly allowed, Chrome still sends it. This implies these entries were added for browser compatibility that my app might not need.
For the Expose-Headers, which headers a client needs to read seems very application-specific. Why would a client need to read User-Agent, DNT, or X-Requested-With? They contain info meant for the server to consume, not the client. Additionally, Cache-Control and Content-Range are already enabled by default, so they seem redundant here.
I ended up going through each header and determining if it was necessary. I compiled a list of changes:
Changes for both Allow and Expose:
- Removed from both, since they are non-standard headers:
  - X-CustomHeader
- Removed from both, since they are non-standard and semi-deprecated:
  - Keep-Alive
Changes for Allow:
- Removed, since they are response-specific headers (used only for servers to inform the client):
  - Content-Range
- Kept, even though they are enabled by default, because the default only applies to certain types of requests (as per MDN):
  - Content-Type
Changes for Expose:
- Removed, since they are already enabled by default (as per MDN):
  - Cache-Control
  - Content-Type
- Removed, since they are request-specific headers (used only for clients to inform the server):
  - DNT
  - User-Agent
  - X-Requested-With
  - If-Modified-Since
  - Range
- Added, since they seem useful:
  - Content-Length
This leaves me with the following:
Access-Control-Allow-Headers: DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range
Access-Control-Expose-Headers: Content-Length,Content-Range
Any comments or corrections would be greatly appreciated.

How to enable CORS as I don't have WebApiConfig.cs

I want to enable CORS on my controller in an ASP.NET website. The tutorials show that I have to change WebApiConfig.cs and add the line config.EnableCors();
The problem is I don't have this file. The closest I have is RouteConfig.cs; where do I enable CORS in my project then?
I have added the following section in web.config file
<httpProtocol>
  <customHeaders>
    <add name="Access-Control-Allow-Origin" value="*" />
  </customHeaders>
</httpProtocol>
but the Ajax calls to the APIs still return
the server responded with a status of 405 (Method Not Allowed)
When I call the API using the browser, it works.
CORS, or Cross-Origin Resource Sharing, is used for cross-domain communication from the browser. The endpoint you're calling from the browser is the one that tells the browser whether the request is allowed to be fulfilled and processed, by sending Access-Control-* headers in the response.
When the browser receives the response, it first checks whether the Access-Control-* headers exist and whether the information in them matches your request. If they exist and match, the browser allows your custom client code to process the response; if the headers are missing or the information in them does not match the request you made, the browser blocks the response from any further processing.
What you have to do is to set those headers in your response.
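For an ASP.NET MVC site without WebApiConfig.cs, one place to do that is Global.asax. Here is a rough sketch, not tested against your project; the method and header lists are placeholders to adjust, and if you keep the customHeaders entry in web.config you should drop the duplicate Allow-Origin line here:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    var response = HttpContext.Current.Response;
    response.AppendHeader("Access-Control-Allow-Origin", "*");

    // The 405 you are seeing is routing rejecting the preflight OPTIONS request;
    // answer it here before it reaches routing.
    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        response.AppendHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        response.AppendHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
        response.End();
    }
}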
For further reading about CORS headers: W3C CORS

Why is an OPTIONS request sent and can I disable it?

I am building a web API. I found that whenever I use Chrome to POST or GET to my API, an OPTIONS request is always sent before the real request, which is quite annoying. Currently, I have the server ignore any OPTIONS requests. Now my question is: what is the good of sending an OPTIONS request that doubles the server's load? Is there any way to completely stop the browser from sending OPTIONS requests?
Edit 2018-09-13: added some details about this pre-flight request and how to avoid it at the end of this response.
OPTIONS requests are what we call pre-flight requests in Cross-origin resource sharing (CORS).
They are necessary when you're making requests across different origins in specific situations.
This pre-flight request is made by some browsers as a safety measure to ensure that the request being done is trusted by the server.
Meaning the server understands that the method, origin and headers being sent on the request are safe to act upon.
Your server should not ignore these requests but handle them whenever you're attempting to make cross-origin requests.
A good resource can be found here http://enable-cors.org/
A simple way to handle these requests is to ensure that, for any path, an OPTIONS request gets a response with this header:
Access-Control-Allow-Origin: *
This will tell the browser that the server is willing to answer requests from any origin.
For more information on how to add CORS support to your server see the following flowchart
http://www.html5rocks.com/static/images/cors_server_flowchart.png
edit 2018-09-13
The CORS OPTIONS request is triggered only in some cases, as explained in the MDN docs:
Some requests don’t trigger a CORS preflight. Those are called “simple requests” in this article, though the Fetch spec (which defines CORS) doesn’t use that term. A request that doesn’t trigger a CORS preflight—a so-called “simple request”—is one that meets all the following conditions:
The only allowed methods are:
GET
HEAD
POST
Apart from the headers set automatically by the user agent (for example, Connection, User-Agent, or any of the other headers with names defined in the Fetch spec as a “forbidden header name”), the only headers which are allowed to be manually set are those which the Fetch spec defines as being a “CORS-safelisted request-header”, which are:
Accept
Accept-Language
Content-Language
Content-Type (but note the additional requirements below)
DPR
Downlink
Save-Data
Viewport-Width
Width
The only allowed values for the Content-Type header are:
application/x-www-form-urlencoded
multipart/form-data
text/plain
No event listeners are registered on any XMLHttpRequestUpload object used in the request; these are accessed using the XMLHttpRequest.upload property.
No ReadableStream object is used in the request.
Having gone through this issue, below are my conclusions and my solution.
According to the CORS strategy (I highly recommend you read about it), you can't just force the browser to stop sending the OPTIONS request if it thinks it needs to send one.
There are two ways you can work around it:
Make sure your request is a "simple request"
Set Access-Control-Max-Age for the OPTIONS request
Simple request
A simple cross-site request is one that meets all the following conditions:
The only allowed methods are:
GET
HEAD
POST
Apart from the headers set automatically by the user agent (e.g. Connection, User-Agent, etc.), the only headers which are allowed to be manually set are:
Accept
Accept-Language
Content-Language
Content-Type
The only allowed values for the Content-Type header are:
application/x-www-form-urlencoded
multipart/form-data
text/plain
A simple request will not cause a pre-flight OPTIONS request.
Set a cache for the OPTIONS check
You can set an Access-Control-Max-Age for the OPTIONS request, so that the browser will not check the permission again until it has expired.
Access-Control-Max-Age gives the value in seconds for how long the response to the preflight request can be cached for without sending another preflight request.
Limitations noted:
For Chrome, the maximum value for Access-Control-Max-Age is 600 seconds (10 minutes), according to the Chrome source code.
Access-Control-Max-Age only applies per resource; for example, GET requests with the same URL path but different query strings are treated as different resources, so a request to the second resource will still trigger a preflight request.
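If you happen to be on ASP.NET Web API with the Microsoft.AspNet.WebApi.Cors package (as in some of the other answers on this page), a minimal sketch of setting that cache is the PreflightMaxAge property, given in seconds:
// In WebApiConfig.Register(HttpConfiguration config); 600 matches Chrome's 10-minute cap.
var cors = new EnableCorsAttribute("*", "*", "*") { PreflightMaxAge = 600 };
config.EnableCors(cors);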
Please refer to this answer on the actual need for pre-flighted OPTIONS requests: CORS - What is the motivation behind introducing preflight requests?
To avoid the OPTIONS request, the conditions below must be satisfied by the Ajax request:
The request does not set custom HTTP headers, and the content type is not something like 'application/xml' or 'application/json'.
The request method has to be GET, HEAD or POST. If it is POST, the content type must be one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.
Reference:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
When you have the debug console open and the Disable Cache option turned on, preflight requests will always be sent (i.e. before each and every request). If you don't disable the cache, a pre-flight request will be sent only once (per server).
Yes, it's possible to avoid the OPTIONS request. The OPTIONS request is a preflight that happens when you send (post) data to another domain; it's a browser security measure. But we can use another technique: an iframe transport layer. I strongly recommend you forget about any CORS configuration and use a ready-made solution; it will work anywhere.
Take a look here:
https://github.com/jpillora/xdomain
And a working example:
http://jpillora.com/xdomain/
As a developer who understands the reason it exists but needs to access an API that doesn't handle OPTIONS calls without auth, I needed a temporary answer so I could develop locally until the API owner adds proper SPA CORS support or I get a proxy API up and running.
I found you can disable CORS in Safari and Chrome on a Mac.
Disable same origin policy in Chrome
Chrome: Quit Chrome, open a terminal and paste this command: open /Applications/Google\ Chrome.app --args --disable-web-security --user-data-dir
Safari: Disabling same-origin policy in Safari
If you want to disable the same-origin policy on Safari (I have 9.1.1), then you only need to enable the developer menu and select "Disable Cross-Origin Restrictions" from the Develop menu.
As mentioned in previous posts already, OPTIONS requests are there for a reason. If you have an issue with large response times from your server (e.g. an overseas connection), you can also have your browser cache the preflight requests.
Have your server reply with the Access-Control-Max-Age header, and for requests that go to the same endpoint the preflight will be cached and no longer occur.
I have solved this problem like this:
if ($_SERVER['REQUEST_METHOD'] == 'OPTIONS' && ENV == 'devel') {
    header('Access-Control-Allow-Origin: *');
    header('Access-Control-Allow-Headers: X-Requested-With');
    header("HTTP/1.1 200 OK");
    die();
}
It is only for development. With this I am waiting 9 ms and 500 ms instead of 8 s and 500 ms. I can do that because the production JS app will be on the same machine as the production API, so there will be no OPTIONS requests there, while development happens on my local machine.
You can't, but you could avoid CORS by using JSONP.
You can also use an API manager (like the open-source Gravitee.io) to prevent CORS issues between the frontend app and backend services by manipulating headers in the preflight.
The header used in response to a preflight request to indicate which HTTP headers can be used when making the actual request:
content-type
access-control-allow-header
authorization
x-requested-with
and specify, for example, "allow-origin" = localhost:4200.
After spending a whole day and a half trying to work through a similar problem, I found it had to do with IIS.
My Web API project was set up as follows:
// WebApiConfig.cs
public static void Register(HttpConfiguration config)
{
    var cors = new EnableCorsAttribute("*", "*", "*");
    config.EnableCors(cors);
    //...
}
I did not have CORS-specific config options in the web.config > system.webServer node like I have seen in so many posts.
There was no CORS-specific code in Global.asax or on the controller as a decorator.
The problem was the app pool settings.
The managed pipeline mode was set to Classic (I changed it to Integrated) and the identity was set to Network Service (I changed it to ApplicationPoolIdentity).
Changing those settings (and refreshing the app pool) fixed it for me.
The OPTIONS request is a feature of web browsers, so it's not easy to disable. But I found a way to redirect it away with a proxy. It's useful in cases where the service endpoint just cannot handle CORS/OPTIONS yet, perhaps because it is still under development or misconfigured.
Steps:
Set up a reverse proxy for such requests with a tool of your choice (nginx, YARP, ...).
Create an endpoint just to handle the OPTIONS request. It might be easiest to create a normal empty endpoint, and make sure it handles CORS well.
Configure two sets of rules for the proxy: one to route all OPTIONS requests to the dummy endpoint above, and another to route all other requests to the actual endpoint in question.
Update the web site to use the proxy instead.
Basically, this approach tricks the browser into thinking the OPTIONS request works. Considering CORS is not there to enhance security but to relax the same-origin policy, I hope this trick can work for a while. :)
One solution I have used in the past: let's say your site is on mydomain.com, and you need to make an Ajax request to foreigndomain.com.
Configure an IIS rewrite from your domain to the foreign domain - e.g.
<rewrite>
  <rules>
    <rule name="ForeignRewrite" stopProcessing="true">
      <match url="^api/v1/(.*)$" />
      <action type="Rewrite" url="https://foreigndomain.com/{R:1}" />
    </rule>
  </rules>
</rewrite>
On your mydomain.com site you can then make a same-origin request, and there's no need for any OPTIONS request :)
It can also be solved by using a proxy that intercepts the request and writes the appropriate headers.
In the particular case of Varnish these would be the rules:
if (req.http.host == "CUSTOM_URL") {
    set resp.http.Access-Control-Allow-Origin = "*";
    if (req.method == "OPTIONS") {
        set resp.http.Access-Control-Max-Age = "1728000";
        set resp.http.Access-Control-Allow-Methods = "GET, POST, PUT, DELETE, PATCH, OPTIONS";
        set resp.http.Access-Control-Allow-Headers = "Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since";
        set resp.http.Content-Length = "0";
        set resp.http.Content-Type = "text/plain; charset=UTF-8";
        set resp.status = 204;
    }
}
What worked for me was to import "github.com/gorilla/handlers" and then use it this way:
// Assumes: import ( "log"; "net/http"; "github.com/gorilla/handlers"; "github.com/gorilla/mux" )
router := mux.NewRouter()
router.HandleFunc("/config", getConfig).Methods("GET")
router.HandleFunc("/config/emcServer", createEmcServers).Methods("POST")
// Whitelist the origins, headers and methods the browser may use; the CORS wrapper
// then answers the preflight OPTIONS requests for every registered route.
headersOk := handlers.AllowedHeaders([]string{"X-Requested-With", "Content-Type"})
originsOk := handlers.AllowedOrigins([]string{"*"})
methodsOk := handlers.AllowedMethods([]string{"GET", "HEAD", "POST", "PUT", "OPTIONS"})
log.Fatal(http.ListenAndServe(":" + webServicePort, handlers.CORS(originsOk, headersOk, methodsOk)(router)))
As soon as I executed an Ajax POST request with JSON data attached to it, Chrome would always add the Content-Type header, which was not in my previous AllowedHeaders config.

Mixing OutputCache and Dynamic Range Requests in ASP.NET MVC

I'm using the RangeFilePathResult class to serve mp3 files from an MVC controller.
The action is defined as follows:
[CacheFilter]
[OutputCache(CacheProfile = "Mp3Cache")]
public RangeFilePathResult Mp3Completed(string f)
{
    FileInfo info = new FileInfo(string.Format("C:\\test\\{0}.mp3", f));
    return new RangeFilePathResult("audio/mpeg", info.FullName, info.LastWriteTimeUtc, info.Length);
}
And the cache policy as follows:
<caching>
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="Mp3Cache" duration="3600" varyByParam="f" location="Any" />
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>
Why does this work correctly as is? It seems that you would need an explicit varyByHeader to ensure that range requests work with output caching. The problem I was addressing was that jPlayer on iOS would be unable to display the duration of MP3 files and would render NaN when they were served with a traditional FilePathResult; it works under this implementation.
The most important thing here is that the response to a range request is not a typical 200 OK but 206 Partial Content.
In the case of 206 Partial Content, several additional conditions must be met:
The request must have included a Range header field indicating the desired range, and may have included an If-Range header field to make the request conditional
The response must include either a Content-Range header field indicating the range included with this response, or a multipart/byteranges Content-Type including Content-Range fields for each part.
The response must include ETag and/or Content-Location (if the header would have been sent in a 200 response to the same request)
The response must include Date
Now every caching mechanism which supports the HTTP protocol (in the case of location="Any" this means the browser, proxy servers, and your hosting IIS) must know that 206 Partial Content is different from 200 OK and treat it accordingly. Below you can find the most important rules for caching a 206 Partial Content response:
A cache must not combine a 206 response with other previously cached content if the ETag or Last-Modified headers do not match exactly.
A cache that does not support the Range and Content-Range headers must not cache 206 responses.
To sum this up, you don't need to use varyByHeader, because every cache which follows the HTTP protocol knows that in the case of 206 Partial Content the Range and Content-Range headers are part of the variant.
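For what it's worth, if you ever did want to be explicit, an attribute-level variant of your cache profile with VaryByHeader added would look roughly like this (a sketch only; as explained above, it should not be necessary):
[CacheFilter]
[OutputCache(Duration = 3600, VaryByParam = "f", VaryByHeader = "Range", Location = System.Web.UI.OutputCacheLocation.Any)]
public RangeFilePathResult Mp3Completed(string f)
{
    // Same body as in the question.
    FileInfo info = new FileInfo(string.Format("C:\\test\\{0}.mp3", f));
    return new RangeFilePathResult("audio/mpeg", info.FullName, info.LastWriteTimeUtc, info.Length);
}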

ASP.NET MVC and IE caching - manipulating response headers ineffective

Background
I'm attempting to help a colleague debug an issue that hadn't been a problem for the past 6 months. After the most recent deployment of an ASP.NET MVC 2 application, FileResult responses that force a PDF file at the user for opening or saving are having trouble existing long enough on the client machine for the PDF reader to open them.
Earlier versions of IE (especially 6) are the only browsers affected. Firefox, Chrome, and newer versions of IE (>8) all behave as expected. With that in mind, the next section describes the actions necessary to recreate the issue.
Behavior
User clicks a link that points to an action method (a plain hyperlink with an href attribute).
The action method generates a PDF represented as a byte stream. The method always recreates the PDF.
In the action method, headers are set to instruct browsers how to cache the response. They are:
response.AddHeader("Cache-Control", "public, must-revalidate, post-check=0, pre-check=0");
response.AddHeader("Pragma", "no-cache");
response.AddHeader("Expires", "0");
For those unfamiliar with exactly what the headers do:
a. Cache-Control: public
Indicates that the response MAY be cached by any cache, even if it would normally be non-cacheable or cacheable only within a non- shared cache.
b. Cache-Control: must-revalidate
When the must-revalidate directive is present in a response received by a cache, that cache MUST NOT use the entry after it becomes stale to respond to a
subsequent request without first revalidating it with the origin server
c. Cache-Control: pre-check (introduced with IE5)
Defines an interval in seconds after which an entity must be checked for freshness prior to showing the user the resource.
d. Cache-Control: post-check (introduced with IE5)
Defines an interval in seconds after which an entity must be checked for freshness. The check may happen after the user is shown the resource, but ensures that on the next roundtrip the cached copy will be up-to-date.
e. Pragma: no-cache (to ensure backwards compatibility with HTTP/1.0)
When the no-cache directive is present in a request message, an application SHOULD forward the request toward the origin server even if it has a cached copy of what is being requested
f. Expires
The Expires entity-header field gives the date/time after which the response is considered stale.
We return the file from the action
return File(file, "mime/type", fileName);
The user is presented with an Open/Save dialog box
Clicking "Save" works as expected, but clicking "Open" launches the PDF reader, but the temporary file IE stored has already been deleted by the time the reader tries to open the file, so it complains that the file is missing (and it is).
There are a half dozen other apps here that use the same headers to force Excel, CSV, PDF, Word, and a ton of other content at users and there's never been an issue.
The Question
Are the headers correct for what we're trying to do? We want the file to exist temporarily (get cached), but always be replaced by a new version even though the requests may be identical.
The response headers are set in the action method before returning a FileResult. I've asked my colleague to try creating a new class that inherits from FileResult and overriding the ExecuteResult method so that it modifies the headers and then calls base.ExecuteResult() -- no status on that yet.
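For reference, the kind of override I had in mind is a rough, untested sketch along these lines (the class name is made up), mirroring the headers listed above:
public class NonCachedFileResult : FileContentResult
{
    public NonCachedFileResult(byte[] contents, string contentType)
        : base(contents, contentType) { }

    public override void ExecuteResult(ControllerContext context)
    {
        // Set the caching headers as late as possible so nothing downstream stomps on them.
        var response = context.HttpContext.Response;
        response.AddHeader("Cache-Control", "public, must-revalidate, post-check=0, pre-check=0");
        response.AddHeader("Pragma", "no-cache");
        response.AddHeader("Expires", "0");
        base.ExecuteResult(context);
    }
}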
I have a hunch the "Expires" header of "0" is the culprit. According to this W3C article, setting it to "0" implies "already expired." I do want it to be expired, I just don't want IE to go removing it from the filesystem before the application handling it gets a chance to open it.
As always, thanks!
Edit: The Solution
Upon further testing (using Fiddler to inspect the headers), we were seeing that the response headers we thought were getting set were not the ones being interpreted by the browser. Not being familiar with the code myself, I was unaware of an underlying issue: the headers were getting stomped on outside of the action method.
Nonetheless, I'm going to leave this question open. Still outstanding is this: there seems to be some discrepancy between the Expires header having a value of 0 vs. -1. If anybody can lay claim to differences by design, in regards to IE, I would still like to hear about it. As for a solution though, the above headers do work as intended with the Expires value set to -1 in all browsers.
Update 1
The post How to control web page caching, across all browsers? describes in detail that caching can be prevented in all browsers with the help of setting Expires = 0. I'm still not sold on this 0 vs -1 argument...
I think you should just use
HttpContext.Current.Response.Cache.SetMaxAge(new TimeSpan(0));
or
HttpContext.Current.Response.Headers.Set("Cache-Control", "private, max-age=0");
to set max-age=0, which means nothing more than that the cache must revalidate (see here). If you additionally set an ETag in the header with some custom checksum or hash of the data, the ETag from the previous request will be sent to the server. The server is then able either to return the data or, if the data are exactly the same as before, to return an empty body with HttpStatusCode.NotModified as the status code. In that case the web browser will take the data from its local cache.
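A minimal sketch of that ETag/304 flow in an MVC action could look like the following (GetReportBytes and ComputeHash are hypothetical placeholders for your own data source and checksum):
public ActionResult Report()
{
    byte[] data = GetReportBytes();                // hypothetical data source
    string etag = "\"" + ComputeHash(data) + "\""; // hypothetical checksum helper

    Response.Cache.SetCacheability(HttpCacheability.Private);
    Response.Cache.SetMaxAge(TimeSpan.Zero);       // Cache-Control: private, max-age=0
    Response.Cache.SetETag(etag);

    // If the browser re-sent a matching ETag, return 304 with an empty body;
    // the browser will then use its locally cached copy.
    if (Request.Headers["If-None-Match"] == etag)
    {
        Response.StatusCode = (int)System.Net.HttpStatusCode.NotModified;
        return new EmptyResult();
    }

    return File(data, "application/pdf");
}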
I recommend you use Cache-Control: private, which enforces two important things: 1) it switches off caching of the data on proxies, which sometimes have very aggressive caching settings; 2) it allows caching of the data but does not permit sharing the cache with other users. This can solve privacy problems, because the data you return to one user might not be allowed to be read by other users. By the way, the call HttpContext.Current.Response.Cache.SetMaxAge(new TimeSpan(0)) sets Cache-Control: private, max-age=0 in the HTTP header by default. If you do want to use Cache-Control: public, you can use SetCacheability(HttpCacheability.Public); to override the behavior, or use Headers.Set instead of Cache.SetMaxAge.
If you are interested in studying more caching options of the HTTP protocol, I recommend reading the caching tutorial.
UPDATED: I decided to write some more information to clarify my position. According to the information on Wikipedia, even web browsers as old as Mosaic 2.7, Netscape 2.0 and Internet Explorer 3.0 support the March 1996 pre-standard of HTTP/1.1 described in RFC 2068. So I suppose (but have not tested) that those old web browsers support the max-age=0 HTTP header. In any case, Netscape 2.06 and Internet Explorer 4.0 definitely support HTTP/1.1.
So you should first ask yourself: which HTML standard do you use? Do you still use HTML 2.0 instead of the later HTML 3.2 published in January 1997? I suppose you use at least HTML 4.0, published in December 1997. So if you build your application in at least HTML 4.0, your site can be oriented toward web clients which support HTTP/1.1 and ignore (not support) the web clients which don't.
Now about "Cache-Control" directives other than "private, max-age=0". Including them is, in my opinion, pure paranoia. When I had some caching problems myself I also tried including various other directives, but later, after carefully reading section 14.9 of RFC 2616, I use only "Cache-Control: private, max-age=0".
The only "Cache-Control" header which can be additionally discussed is "must-revalidate" described on the section 14.9.4 which I referenced before. Here is the quote:
The must-revalidate directive is necessary to support reliable
operation for certain protocol features. In all circumstances an
HTTP/1.1 cache MUST obey the must-revalidate directive; in particular,
if the cache cannot reach the origin server for any reason, it MUST
generate a 504 (Gateway Timeout) response.
Servers SHOULD send the must-revalidate directive if and only if
failure to revalidate a request on the entity could result in
incorrect operation, such as a silently unexecuted financial
transaction. Recipients MUST NOT take any automated action that
violates this directive, and MUST NOT automatically provide an
unvalidated copy of the entity if revalidation fails.
Although this is
not recommended, user agents operating under severe connectivity
constraints MAY violate this directive but, if so, MUST explicitly
warn the user that an unvalidated response has been provided. The
warning MUST be provided on each unvalidated access, and SHOULD
require explicit user confirmation.
Sometimes, if I have a problem with my Internet connection, I see an empty page with a "Gateway Timeout" message. It comes from the usage of the "must-revalidate" directive. I don't think that a "Gateway Timeout" message really helps the user.
So people who prefer to start the self-destruct procedure when they hear a "busy" signal on a call to their boss should additionally use the "must-revalidate" directive in the "Cache-Control" header. To everyone else I recommend just using "Cache-Control: private, max-age=0" and nothing more.
For IE, I remember having to set Expires: -1. How to prevent caching in Internet Explorer seems to confirm this with the following code snippet.
<% Response.CacheControl = "no-cache" %>
<% Response.AddHeader "Pragma", "no-cache" %>
<% Response.Expires = -1 %>
Looking back at the code, this is what I found. Also, I vaguely remember that if you set Cache-Control: private, it may not behave correctly with SSL.
Response.AddHeader("Cache-Control", "no-cache");
Response.AddHeader("Expires", "-1");
Also, So, You Don't Want To Cache, Huh? mentions -1, but uses methods on Response.Cache instead:
// Stop Caching in IE
Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
// Stop Caching in Firefox
Response.Cache.SetNoStore();
However, ASP Page caching issue (IE8) says this code doesn't work.
