Shall I use the Content-Security-Policy HTTP header for a backend API?

We're implementing HSTS on our backend API and I stumbled upon the Content Security Policy (CSP) header. This header tells the browser where resources such as images, video, stylesheets, scripts, and so on may be loaded from.
Since a backend API won't really display things in a browser, what's the value of having this header set?

CSP is a technique designed to mitigate XSS attacks. It is most useful when serving hypermedia that relies on other resources being loaded along with it, which is not exactly a scenario I would expect with an API. That is not to say you cannot use it. If there really is no interactive content in your responses, nothing stops you from serving this header:
Content-Security-Policy: default-src 'none';
Going one step further, you could use CSP as a makeshift Intrusion Detection System by setting report-uri so that violation reports are sent back to you. That is well within the intended use, but still a bit on the cheap side.
In conclusion, it can theoretically improve the security of your API with little effort. In practice, the advantages may be slim to none. If you feel like it, there should be no harm in sending the header. You may gain more by, for example, suppressing MIME-type sniffing with X-Content-Type-Options: nosniff.
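For illustration, a minimal Node.js sketch (the handler and port are hypothetical, not from the answer) that sends both headers on every API response:

// Minimal sketch: harden every API response with CSP and nosniff.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Content-Security-Policy', "default-src 'none'");
  res.setHeader('X-Content-Type-Options', 'nosniff'); // suppress MIME sniffing
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ status: 'ok' }));
}).listen(8080);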
See also: The OWASP Secure Headers Project


Why should we include CSP headers in the HTTP response for an API?

OWASP recommends using Content-Security-Policy: frame-ancestors 'none' in API responses in order to avoid drag-and-drop style clickjacking attacks.
However, the CSP spec seems to indicate that after the HTML page is loaded, any other CSP rules in the same context are discarded without effect. That matches my mental model of how CSP works, but if OWASP recommends the header then I'm surely missing something.
Can anyone explain how a CSP header on an XHR response can improve security, given that the HTML page is already loaded and the "main" CSP already evaluated? How does that work in the browser?
how a CSP header on an XHR response can improve security, given that the HTML page is already loaded and the "main" CSP already evaluated?
You are right: browsers use the CSP of the main page and simply ignore a CSP header sent along with an XHR response.
But you haven't considered the second scenario: the API response being opened directly in the browser's address bar or in a frame. In that case, cookies are available to the response page, and if the API has an XSS flaw (as, for example, the PyPI simple endpoint API once did), the user's confidential data may become available to an attacker.
Therefore, it is better to protect API responses with a default-src 'none' policy, and to do the same for 403/404/500 and other error pages.
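For example, an error response hardened this way might look like the following (an illustrative response, not from the answer):

HTTP/1.1 404 Not Found
Content-Security-Policy: default-src 'none'
Content-Type: application/json

{"error": "not found"}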
Can anyone explain how a CSP header on an XHR response can improve security, given that the HTML page is already loaded and the "main" CSP already evaluated? How does that work in the browser?
Adding to the correct answer by granty above: frames are commonly used for CSP bypasses.
If a frame is allowed on a page (not blocked by the CSP), the frame has its own CSP scope. So if you create an API that serves data, you don't want to allow it to be framed, as it could be used to bypass the original CSP (for data exfiltration, for example).
You can block this vulnerability by setting Content-Security-Policy: frame-ancestors 'none';, and then your API will refuse to be framed.
See this article on bypassing CSP for more info. The POC uses a creative hack:
frame = document.createElement("iframe");
frame.src = "/%2e%2e%2f";
document.body.appendChild(frame);
which in turn triggers the NGINX error page, which does not have any CSP set. Many production CSPs are vulnerable to this issue.
Since not setting a CSP on a framed page would essentially default to no CSP (everything is open), the article suggests:
CSP headers should be present on all pages, even on the error pages returned by the web server
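With NGINX, for instance, one way to cover error pages as well is the always parameter on add_header (a sketch; the policy value is illustrative):

# Send the CSP on every response, including 4xx/5xx error pages.
# "always" (available since nginx 1.7.5) adds the header to error responses too.
add_header Content-Security-Policy "default-src 'none'; frame-ancestors 'none'" always;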
The frame-ancestors 'none' directive indicates to the browser, on page load, that the resource must not be rendered in a frame (including frame, iframe, embed, object, and applet tags). In other words, the policy does not allow it to be framed by any other page.
The CSP header for the API or page is read at load time; it is not something that happens after the fact. The "main" CSP isn't pertinent here, because it is the URI inside the frame that sends the CSP for itself. The browser simply honors the frame-ancestors 'none' request made by that URI.
The frame-ancestors directive restricts the URLs which can embed the resource using frame, iframe, object, or embed. Resources can use this directive to avoid many UI Redressing [UISECURITY] attacks, by avoiding the risk of being embedded into potentially hostile contexts.
References
CSP frame-ancestors
Clickjacking Defense Cheat Sheet
Content Security Policy
Web Sec Directive Frame Ancestors

Chrome's Advanced REST Client doesn't mind CORS restrictions, how?

I developed a client and a server app. In the server's Web.config, I set the property
<add name="Access-Control-Allow-Origin" value="http://domain.tld:4031" />
And, indeed, when I try connecting with a client installed in a different location, I get rejected. But I do not get rejected when I use Chrome's Advanced REST Client from the very same location!
In the extension, the response header indicates
Access-Control-Allow-Origin: http://www.domain.tld:4031
So how come I still get a "200 OK" answer, and the data I requested?
UPDATE:
I do not think this topic is enough of an answer: How does Google Chrome's Advanced REST client make cross domain POST requests?
My main concern is: how come it is possible to "ask" for those extra permissions? I believe the client shouldn't be allowed to just decide which permissions it receives; I thought that was up to the server alone. What if I just "ask" for extra permissions to access your data on your computer? It doesn't make sense to me...
The reason the REST client (or really the browser) is able to bypass the CORS restriction is that it is a client-side protection. It isn't the responsibility of the server to provide this protection; it is a feature implemented by most modern browser vendors to protect their users from XSS hazards.
The following quote from the Wikipedia CORS page sums it up quite well:
"Although some validation and authorization can be performed by the server, it is generally the browser's responsibility to support these headers and respect the restrictions they impose." - Wikipedia
You could of course, as the quote suggests, do some server-side validation of your own. The Access-Control-Allow-Origin header, however, is more of an indication to the browser that it is okay to allow the specified origins. It is then up to the browser to decide whether it wants to honor the header. Previous versions of Chrome, for example, even had a flag to turn off the same-origin policy altogether.
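As a sketch of such server-side validation (the allow-list and port are hypothetical, not part of the original answer), you could check the Origin header yourself rather than rely on the browser:

// Reject cross-origin browser requests on the server instead of trusting
// the browser to enforce CORS. The allow-list is hypothetical.
const http = require('http');
const allowed = ['http://domain.tld:4031'];

http.createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && !allowed.includes(origin)) {
    res.writeHead(403);
    return res.end('Forbidden origin');
  }
  if (origin) res.setHeader('Access-Control-Allow-Origin', origin);
  res.end('{"data": "hello"}');
}).listen(4031);

Note that a non-browser client can simply omit or forge the Origin header, which is exactly why CORS should never be treated as an access-control mechanism on its own.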

What is the point of the Access-Control-Allow-Origin http header?

I have difficulties in seeing the point of the Access-Control-Allow-Origin http header.
I thought that if a client (browser) gets a "no" from a server once, then it would not send any further requests. But Chrome and Firefox keep sending requests.
Could anyone tell me a real life example where a header like this makes sense?
thanks!
The Access-Control-Allow-Origin header should contain a list of origins which are "allowed" to access the resource.
It thus determines which domains can make requests to your server for resources.
For example, sending back a header of Access-Control-Allow-Origin: * would allow all sites to access the requested resource.
On the other hand, sending back Access-Control-Allow-Origin: http://foo.example.com will allow access only to http://foo.example.com.
There's some more information on this over at the Mozilla Developer Site
For example
Let's suppose we have a URL on our own domain that returns a JSON collection of Music Albums by Artist. It might look like this:
http://ourdomain.com/GetAlbumsByArtist/MichaelJackson.json
We might use some AJAX on our website to get this JSON data and then display it.
But what if someone from another site wishes to use our JSON object for themselves? Perhaps we have another website http://subdomain.ourdomain.com which we own and would like to use our feed from ourdomain.com.
Traditionally we can't make cross-domain requests for this data.
By specifying other domains that are allowed access to our resource, we now open the doors to cross-domain requests.
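As a sketch, the page on http://subdomain.ourdomain.com could then fetch the feed like this (reusing the example URL above):

// Cross-origin fetch from http://subdomain.ourdomain.com; it succeeds only
// if ourdomain.com answers with a matching Access-Control-Allow-Origin.
fetch('http://ourdomain.com/GetAlbumsByArtist/MichaelJackson.json')
  .then(response => response.json())
  .then(albums => console.log(albums))
  .catch(err => console.error('Blocked or failed:', err));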
CORS implements a two-part security model for cross-origin requests. The problem it is trying to solve is that there are many servers sitting out there on the public internet written by people who either (a) assumed that no browser would ever allow a cross-origin request, or (b) didn't think about it at all.
So, some people want to permit cross-origin communications, but the browser-builders do not feel that they can just unlock browsers and suddenly leave all these websites exposed. To avoid this, they invented a two-part structure. Before a browser will permit a cross-origin interaction with a server, that server has to specifically indicate that it is willing to allow cross-origin access. In the simple cases, that's Access-Control-Allow-Origin. In more complex cases, it's the full preflight mechanism.
It's still true that servers have to implement appropriate resource access control on their resources. CORS is just there to allow the server to indicate to browsers that it is aware of all the issues.
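For a non-simple request (say, a PUT with a custom header), the preflight exchange looks roughly like this (illustrative values):

OPTIONS /api/resource HTTP/1.1
Origin: http://foo.example.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: X-Custom

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: http://foo.example.com
Access-Control-Allow-Methods: PUT
Access-Control-Allow-Headers: X-Custom

Only if that response satisfies the browser does it send the actual PUT.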

HTTP ETag reproduction

Having recently discovered problems relating to the HTTP ETag and our CDN, I've tried to capture some in Fiddler for well-known sites. However, whatever combination of browser and website I use, I don't see any go by.
Is there any reason for this? Can you suggest a combination in which I can see them? Perhaps they're not widely used anymore?
They are definitely widely used; I've used them myself often. The most common use case is conditional requests (always check whether there's new content, but only have the server send the content back if it has changed).
However, Last-Modified can serve the same purpose, and neither is needed if you don't force the browser to always check for new content (no must-revalidate).
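A typical conditional-request exchange looks like this (illustrative values):

GET /logo.png HTTP/1.1

HTTP/1.1 200 OK
ETag: "abc123"

Later the browser revalidates, and the server answers 304 if the content is unchanged:

GET /logo.png HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified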
The reason your CDN isn't using them is one of the following:
They are using Last-Modified instead
They don't force a revalidation and set an expiry time well in the future
They couldn't determine an ETag for a particular piece of content
Misconfiguration

Custom HTTP Headers with old proxies

Is it true that some old proxies/caches will not honor some custom HTTP headers? If so, can you prove it with sections from the HTTP spec or some other information online?
I'm designing a REST API interface. For versioning, I'm debating whether to put the version in the URL (/path1/path2/v1 or /path1/path2?ver=1) or to use a custom X-Version header (in the style of Accept).
I was just reading in O'Reilly's Even Faster Websites about how internet security software in particular, but really anything that has to check the contents of a page, might filter the Accept-Encoding header in order to reduce the CPU time spent decompressing and reading files. The book cites that about 15% of users have this issue.
However, I see no reason why other, custom headers would be filtered. On the other hand, there isn't really any reason to send the version as a header rather than as a GET parameter, is there? It's not really part of the HTTP protocol; it's just your API.
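For comparison, the two styles look like this on the wire (illustrative paths and header name):

GET /path1/path2/v1 HTTP/1.1

GET /path1/path2 HTTP/1.1
X-Version: 1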
Edit: Also, see the actual section of the book I mention.
