I have a PHP-based web application that does some internal caching of content fetched from another CMS.
When editors modify content in the CMS and then reload the web application's website, it will still deliver the cached content. (The CMS does not know of the web application, so there is no automatic cache invalidation possible.)
Now I would like to add a way for editors to use the web application without caching. This could be done with a URL GET parameter, as TYPO3 does with no_cache=1, but that requires manual intervention.
It would be much cooler if there were a browser extension that could be used to toggle caching on/off, and which would just inject an HTTP header into the GET request. The web application would react to that header and internally disable caching.
So my question: is such an HTTP header used in the wild? What is it called?
Yes, there is such a header: Cache-Control.
You might be familiar with this as a response header, but a client can also send it as a request header. In your case, you'd set Cache-Control: no-cache on the request. From RFC 7234:
The no-cache request directive indicates that a cache MUST NOT use a stored response to satisfy the request without successful validation on the origin server.
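For illustration, here is a minimal client-side sketch (the URL is a placeholder): any client, including a browser extension, can attach Cache-Control to the request, and your application would then look at the incoming request header (in PHP it shows up as $_SERVER['HTTP_CACHE_CONTROL']) and skip its internal cache when it contains no-cache.

fetch('https://example.com/some-page', {
  // Request header injected by the client; a browser extension could add the
  // same header to every request it rewrites.
  headers: { 'Cache-Control': 'no-cache' }
}).then(function (response) {
  return response.text();
});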
Related
I have an ASP.NET Web Forms application. The blog post "CacheCow Series - Part 0: Getting started and caching basics" mentions that Output Caching uses HttpRuntime.Cache behind the scenes, hence it is not HTTP caching. The request reaches the server and the cached response is sent from the server (when a valid cached output is available on the server), so the entire content is sent across the wire.
Is there any HTTP caching available for ASP.NET Web Forms, where the response content is not sent from the server if the cache is valid, and the client instead serves it from its HTTP cache after getting only validity information from the server?
REFERENCES
Is page output cache stored in ASP.NET cache object?
Things Caches Do - Ryan Tomayko - 2ndscale.com/
Actually the OutputCache directive is used for both client- and server-side caching. When you set the Location of that directive to Any, Client, Downstream or ServerAndClient, proper cache response headers are set so that browsers or proxies won't request the same page again and will serve their cached version of the page instead. But keep in mind that those clients are free to request those pages again.
Location options with the Cache-Control headers they produce after setting the directive:
<%@ OutputCache Location="XXX" Duration="60" VaryByParam="none" %>
Client: private, max-age=60
Downstream: public, max-age=60
Any: public
ServerAndClient: private, max-age=60
Server: no-cache
No output directive: private
I have a JavaScript app that sends requests to a REST API; the responses from the server have cache headers (like ETag, Cache-Control, Expires). Is caching of responses in the browser automatic, or must the app implement some mechanism to save the data?
An AJAX request is no different from a normal request - it's a GET/POST/HEAD/whatever request being sent by the browser, and it is handled as such. This is confirmed here:
The HTTP and Cache sub-systems of modern browsers are at a much lower level than Ajax’s XMLHttpRequest object. At this level, the browser doesn’t know or care about Ajax requests. It simply obeys the normal HTTP caching rules based on the response headers returned from the server.
As per the jQuery documentation, caches can also be invalidated in at least one usual way (appending a query string):
cache (default: true, false for dataType 'script' and 'jsonp')
Type: Boolean
If set to false, it will force requested pages not to be cached by the browser. Note: Setting cache to false will only work correctly with HEAD and GET requests. It works by appending "_={timestamp}" to the GET parameters. The parameter is not needed for other types of requests, except in IE8 when a POST is made to a URL that has already been requested by a GET.
So in short, given the same headers, AJAX responses are cached the same way as other requests.
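As a small illustration (the endpoint is made up), a single cache-busted GET with jQuery looks like this:

// With cache: false, jQuery appends "_={timestamp}" to the URL, so the browser
// never reuses a cached entry for this request.
$.ajax({
  url: '/api/data',   // made-up endpoint
  type: 'GET',
  dataType: 'json',
  cache: false
}).done(function (data) {
  console.log(data);
});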
The browser automatically handles caching of resources. What you seem to be asking about is the actual response from the server.
You will need to set that up yourself in your application. You can do so both on the front end and the back end.
Most JS frameworks have cache control implemented, for example:
jQuery
$.ajaxSetup({
// Disable caching of AJAX responses
cache: false
});
AngularJS
$http.defaults.cache = false;
etc.
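Without any framework, the native fetch API exposes a per-request cache mode as well; a one-line sketch:

// Bypass the HTTP cache entirely for this request (and don't store the result).
fetch('/api/data', { cache: 'no-store' });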
On the back end it really depends on what language you're using, what server engine, etc.
Check out Memcached, for example:
http://memcached.org/
As with anything in web development there are odd things here and there; for example, some IE versions automatically cache requests and you have to add a unique id to the URL to prevent that.
From https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started :
Note 2: If you do not set the header Cache-Control: no-cache, the browser will cache the response and never re-submit the request, making debugging "challenging." You can also append an always-different additional GET parameter, like a timestamp or a random number (see bypassing the cache).
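Both techniques from that note, sketched with a plain XMLHttpRequest (the endpoint is made up):

// Send a Cache-Control request header and an always-different GET parameter
// so the browser cannot serve a stale cached response.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/data?_=' + Date.now());      // unique URL on every call
xhr.setRequestHeader('Cache-Control', 'no-cache'); // ask for revalidation
xhr.onload = function () {
  console.log(xhr.responseText);
};
xhr.send();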
The browser should handle the cache automatically.
Check this article; the only thing available from JavaScript is a method to clear the cache:
https://developer.chrome.com/extensions/browsingData
If the server sends the response with any of the cache headers, browsers should respect it. There is no difference between regular resources and AJAX requests.
You can also specify cache headers in your AJAX calls to make them skip the cache and fetch the whole response from the server.
Most modern browsers support browser caching based on the cache and expires headers:
http://www.arlocarreon.com/blog/http/http-requests-and-your-browsers-cache/
There are 2 aspects of an HTTP request that can qualify it for being cached:
- Specific HTTP cache and expire headers
- A unique URL
Interesting read:
http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/
I am building a web service that exclusively uses JSON for its request and response content (i.e., no form encoded payloads).
Is a web service vulnerable to CSRF attack if the following are true?
Any POST request without a top-level JSON object, e.g., {"foo":"bar"}, will be rejected with a 400. For example, a POST request with the content 42 would be thus rejected.
Any POST request with a content-type other than application/json will be rejected with a 400. For example, a POST request with content-type application/x-www-form-urlencoded would be thus rejected.
All GET requests will be Safe, and thus not modify any server-side data.
Clients are authenticated via a session cookie, which the web service gives them after they provide a correct username/password pair via a POST with JSON data, e.g. {"username":"user@example.com", "password":"my password"}.
Ancillary question: Are PUT and DELETE requests ever vulnerable to CSRF? I ask because it seems that most (all?) browsers disallow these methods in HTML forms.
EDIT: Added item #4.
EDIT: Lots of good comments and answers so far, but no one has offered a specific CSRF attack to which this web service is vulnerable.
Forging arbitrary CSRF requests with arbitrary media types is effectively only possible with XHR, because a form’s method is limited to GET and POST and a form’s POST message body is also limited to the three formats application/x-www-form-urlencoded, multipart/form-data, and text/plain. However, with the form data encoding text/plain it is still possible to forge requests containing valid JSON data.
So the only threat comes from XHR-based CSRF attacks. And those will only be successful if they come from the same origin, so basically from your own site somehow (e.g. via XSS). Be careful not to mistake the absence of CORS headers (i.e. not setting Access-Control-Allow-Origin: *) for protection: CORS simply prevents clients from reading the response. The whole request is still sent and processed by the server.
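To make the text/plain trick concrete, here is a hypothetical attack sketch (target URL and field values are invented). Note that the resulting Content-Type is still text/plain, so check #2 from the question rejects it:

// Auto-submit a cross-site form whose text/plain body happens to be valid JSON.
// The victim's cookies are sent along, but the Content-Type stays text/plain.
var form = document.createElement('form');
form.method = 'POST';
form.action = 'https://victim.example.com/api/transfer'; // invented target
form.enctype = 'text/plain';

var field = document.createElement('input');
// text/plain encodes the field as name=value, so the body becomes
// {"amount":1000,"padding":"="}  -- syntactically valid JSON.
field.name = '{"amount":1000,"padding":"';
field.value = '"}';
form.appendChild(field);

document.body.appendChild(form);
form.submit();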
Yes, it is possible. You can set up an attacker server which will send the victim's machine a 307 redirect to the target server. You need to use Flash to send the POST instead of using a form.
Reference: https://bugzilla.mozilla.org/show_bug.cgi?id=1436241
It also works on Chrome.
It is possible to do CSRF on JSON-based RESTful services using Ajax. I tested this on an application (using both Chrome and Firefox).
You have to change the contentType to text/plain and the dataType to JSON in order to avoid a preflight request. Then you can send the request, but in order to send session data, you need to set the withCredentials flag in your Ajax request.
I discuss this in more detail here (references are included):
http://wsecblog.blogspot.be/2016/03/csrf-with-json-post-via-ajax.html
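A hedged sketch of what that request could look like with jQuery (endpoint and payload are invented); whether it does any damage still depends on the server accepting a text/plain body:

// text/plain is a "simple" content type, so no CORS preflight is sent;
// withCredentials attaches the victim's session cookie to the cross-site request.
$.ajax({
  url: 'https://victim.example.com/api/profile',  // invented endpoint
  type: 'POST',
  data: JSON.stringify({ email: 'attacker@example.com' }),
  contentType: 'text/plain',
  xhrFields: { withCredentials: true }
});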
I have some doubts concerning point 3. Although it can be considered safe as it does not alter the data on the server side, the data can still be read, and the risk is that it can be stolen.
http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/
Is a web service vulnerable to CSRF attack if the following are true?
Yes. It's still HTTP.
Are PUT and DELETE requests ever vulnerable to CSRF?
Yes
it seems that most (all?) browsers disallow these methods in HTML forms
Do you think that a browser is the only way to make an HTTP request?
Can anyone break down what these two methods do at an HTTP level?
We are dealing with Akamai edge caching and have been told that SetNoStore() will cause a cache exclusion so that (for example) form pages will always post back to the origin server. According to {guy} this sets the HTTP header:
Cache-Control: "no-cache, no-store"
As I was implementing this change to our forms I found SetNoServerCaching(). Well that seems to make a bit more sense semantically, and the documentation says "Explicitly denies caching of the document on the origin-server."
So I went down to the sea sea sea to see what I could see see see. I tried both of these methods and reviewed the headers in Firebug and Fiddler.
And from what I can tell, both of these methods set the exact same HTTP header.
Can anyone explain if there are actual differences between these methods, and if so, where are they hiding in the HTTP response?!
There are a few differences.
SetNoStore essentially stops the browser (and any network resource such as a CDN) from saving any part of the response or request; that includes saving to temp files. This will set the no-store HTTP 1.1 cache-control directive.
SetNoServerCaching will essentially stop the server from saving files. In ASP.NET there are several levels of caching that can happen: data only, partial requests, full pages, and SQL data. This call should stop the HTTP (full and partial page) output being saved on the server. This method should not set the Cache-Control headers, no-store or no-cache.
There is also
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetMaxAge(new TimeSpan(1, 0, 0));
as a possible way of setting up caching; this will set the Cache-Control max-age (content expiry) on the response.
For a CDN you probably want to set that expiry so the CDN knows when to fetch new content if it gets a HIT. You probably don't want no-cache or no-store, as that would cause a refetch on every HIT, essentially nullifying any benefit the CDN brings you, except that it may have a faster backbone connection to the end user than your current ISP, and that would be marginal.
The difference between the two is:
HttpCachePolicy.SetNoStore() or Response.Cache.SetNoStore:
Prevents the browser from caching the ASPX page.
HttpCachePolicy.SetNoServerCaching or Response.Cache.SetNoServerCaching:
Stops all origin-server caching for the current response. Explicitly denies caching of the document on the origin-server. Once set, all requests for the document are fully processed.
When these methods are invoked, caching cannot be reenabled for the current response.
In how many ways can an HTTP request be generated?
There are endless ways to create HTTP requests, and endless places to send them from. Your server actually has no idea what the origin of such a request is (whether it's an AJAX or "regular" request, sent from a console application, or ...).
But there are HTTP methods (HTTP verbs) that (can) tell the server about the intent of the request: http://en.wikipedia.org/wiki/HTTP_Verbs#Request_methods
Also you can set headers in a request, for example the content-type or the accepted encoding: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields
Most JavaScript libraries for example set the (non-standard) HTTP header X-Requested-With, so your application can differentiate between regular and ajax requests.
You see, it's even possible to set your own, non-standard headers. There are endless possible combinations...
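For example (a tiny sketch, the endpoint is made up), a script can attach whatever headers it likes, and the server simply sees them on the incoming request:

// Set the conventional X-Requested-With header plus an arbitrary custom header.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/endpoint');                          // made-up URL
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.setRequestHeader('X-My-Custom-Header', 'hello');        // non-standard
xhr.send();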
HttpRequest is a C# class that wraps the request sent by a client during a Web request.
There are many ways to generate it. The most usual one happens when your browser connects to an ASP.NET website.
You can, for example, create your own custom HttpRequest to request a specific web page from a C# console application.
Are you trying to achieve something more specific?