How do I tell ASP.Net to compress the Response, but not the Cookies in the Response? So all the HTML output, but not the cookies.
Background: BREACH
The BREACH attack remains unsolved. It works against TLS-secured, gzip-compressed responses that contain a secret.
Any site where you're logged in ought to have HTTPS enabled, and it is going to keep sending back in its responses a cookie that holds the perfect secret for an attacker to target: if they can get it, they've got your token and can masquerade as you.
There's no satisfactory solution to this, but one strong mitigation is to compress the secrets separately from the rest of the response, or not at all. Another is to include a CSRF token. For pages that display the result of submitting form data, a CSRF token is fine, since we need one anyway, and caching isn't so important performance-wise. But static pages need to be cacheable, which makes the weight of a CSRF token too much.
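To see why compressing secrets together with attacker-influenced content is dangerous, here is a minimal sketch of the length side channel that BREACH exploits, using Python's zlib as a stand-in for HTTP gzip. The page body, secret token, and guesses are all made up for illustration:

```python
import zlib

SECRET = "csrf_token=s3cr3tvalue"  # hypothetical per-session secret in the page
FILLER = "<p>lorem ipsum dolor sit amet</p>" * 10  # typical compressible HTML

def response_size(attacker_input: str) -> int:
    # The attacker controls part of the page (e.g. a reflected search term);
    # the secret appears elsewhere in the same compression context. When the
    # guess matches the secret, DEFLATE replaces it with a back-reference and
    # the compressed response shrinks -- an observable length difference.
    body = f"<html>{FILLER}{attacker_input}{FILLER}{SECRET}</html>"
    return len(zlib.compress(body.encode()))

wrong_guess = response_size("csrf_token=qwerty123456")
right_guess = response_size("csrf_token=s3cr3tvalue")
# right_guess comes out smaller than wrong_guess: the secret compressed
# against the attacker's input, leaking one bit per guess.
```

In the real attack the guess is refined byte by byte, which is exactly why separate compression contexts for secrets (or no compression at all) kill the side channel.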
If we could just tell ASP.Net not to compress the cookie, the only secret in those responses, we'd be good to go:
Caching works on the static pages that need it
HTTPS and gzip get to be in play at the same time, with gzip switched off for just that little bit of the response
BREACH is dead
So, is this possible, and if so, how? I'm fine even with something like an HttpModule that does the gzip step, so long as it doesn't produce a corrupt response.
Some kind of patch or module that just separates the gzip compression contexts (the main proposed solution to BREACH) would be even better, but that seems like asking too much.
Note that there seems to be conflict in the security community as to whether BREACH can be used to get at cookies/session tokens in the first place:
It can. It can't.
Related
I was just reading https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie:
Lax: The cookie is not sent on cross-site requests, such as calls to
load images or frames, but is sent when a user is navigating to the
origin site from an external site (e.g. if following a link). This is
the default behavior if the SameSite attribute is not specified.
If this is the default, then doesn't this mean CSRF attacks can't happen?
If someone loads a malicious website that runs JavaScript in the background to make a simple POST request to a website the victim is currently logged into, then the default behaviour is that the cookie won't be sent, right?
Also, why would someone choose to use Strict over Lax?
Why would you ever want to prevent a user's browser sending a cookie to the origin website when navigating to that website, which is what Strict does?
CSRF attacks are still possible when SameSite is Lax. It prevents the cross-site POST attack you mentioned, but if a website triggers an unsafe operation with a GET request then it would still be possible. For example, many sites currently trigger a logout with a GET request, so it would be trivial for an attacker to log a user out of their session.
The standard addresses this directly:
Lax enforcement provides reasonable defense in depth against CSRF
attacks that rely on unsafe HTTP methods (like "POST"), but does not
offer a robust defense against CSRF as a general category of attack:
Attackers can still pop up new windows or trigger top-level
navigations in order to create a "same-site" request (as
described in section 5.2.1), which is only a speedbump along the
road to exploitation.
Features like <link rel='prerender'> can be
exploited to create "same-site" requests without the risk of user
detection.
Given that, the reason why someone would use Strict is straightforward: it prevents a broader class of CSRF attacks. There's a tradeoff, of course, since it prevents some ways of using your site, but if those use cases aren't important to you then the tradeoff might be justified.
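For reference, setting the attribute from Python's standard library might look like this (the cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

# Build a session cookie with the usual hardening attributes.
cookie = SimpleCookie()
cookie["session"] = "opaque-token"
cookie["session"]["secure"] = True      # only sent over HTTPS
cookie["session"]["httponly"] = True    # not readable from JavaScript
cookie["session"]["samesite"] = "Lax"   # or "Strict" for the broader protection

header = cookie["session"].OutputString()
# produces a header value like:
#   session=opaque-token; Secure; HttpOnly; SameSite=Lax
```

Switching `"Lax"` to `"Strict"` is a one-word change; the trade-off, as discussed above, is that the cookie is then withheld even on top-level navigations into your site.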
I develop software which stores files in directories with random names to prevent unauthorized users from downloading them.
The first precaution is to store them on a separate top-level domain (to prevent cookie theft).
The second danger is HTTP referer which may reveal the name of the secret directory.
My experiments with the Chrome browser show that the HTTP referer is sent only when I click a link in my (secret) file. So the trouble is limited to files which may contain links (in Chrome, HTML and PDF). Can I rely on this behavior (the referer not being sent if the next page is opened not from a link on the current (secret) page but by some other method, such as entering the URL directly) for all browsers?
So the problem is limited to HTML and PDF files. But it is not a complete security solution.
I suspect that we can fully solve this problem by adding Content-Disposition: attachment when serving all our secret files. Will it prevent the HTTP referer?
Also note that I am going to use HTTPS so that a man-in-the-middle is not able to download our secret files.
You can use the Referrer-Policy header to try to control referer behaviour. Note, however, that this requires clients to implement it.
Instead of trying to conceal the file location, may I suggest you implement proper authentication and authorization handling?
I agree that Referrer-Policy is your best first step, but as DaSourcerer notes, it is not universally implemented on browsers you may support.
A fully server-side solution is as follows:
User connects to .../<secret>
Server generates a one-time token and redirects to .../<token>
Server provides the document and invalidates the token
Now the referer will point to .../<token>, which is no longer valid. This has usability trade-offs, however:
Reloading the page will not work (though you may be able to address this with a cookie or session)
Users cannot share the URL from the URL bar, since it's technically invalid (in some cases that could be a minor benefit)
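The three steps above could be sketched server-side like this. The in-memory store and the `/d/<token>` path are illustrative assumptions; a real deployment would want token expiry and shared storage such as a cache:

```python
import secrets
from typing import Optional

# One-time token store. In production this would be a shared cache with TTLs;
# a plain dict is enough to show the flow.
_tokens = {}

def issue_redirect(secret_path: str) -> str:
    """Step 2: mint a one-time token for the secret path and return the
    redirect target the server should send the browser to."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = secret_path
    return f"/d/{token}"

def serve(token: str) -> Optional[str]:
    """Step 3: resolve the token to the document once, then invalidate it.
    A second request with the same token gets nothing."""
    return _tokens.pop(token, None)
```

After this flow, the Referer seen by outbound links is the `/d/<token>` URL, which is already dead.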
You may be able to get the same basic benefits without the usability trade-offs by doing the same thing with an IFRAME rather than redirecting. I'm not certain how IFRAME influences Referer.
This entire solution is basically just Referer masking done proactively. If you can rewrite the links in the document, then you could instead use Referer masking on the way out. (i.e. rewrite all the links so that they point to https://yoursite.com/redirect/....) Since you mention PDF, I'm assuming that this would be challenging (or that you otherwise do not want to rewrite the document).
Let's say that a page is just printing the value of the HTTP 'referer' header with no escaping. So the page is vulnerable to an XSS attack, i.e. an attacker can craft a GET request with a referer header containing something like <script>alert('xss');</script>.
But how can you actually use this to attack a target? How can the attacker make the target issue that specific request with that specific header?
This sounds like a standard reflected XSS attack.
In reflected XSS attacks, the attacker needs the victim to visit some site which in some way is under the attacker's control. Even if this is just a forum where an attacker can post a link in the hope somebody will follow it.
In the case of a reflected XSS attack with the referer header, then the attacker could redirect the user from the forum to a page on the attacker's domain.
e.g.
http://evil.example.com/?<script>alert(123)</script>
This page in turn redirects to the following target page in a way that preserves referer.
http://victim.example.org/vulnerable_xss_page.php
Because the page shows the referer header without proper escaping, http://evil.example.com/?<script>alert(123)</script> gets output within the HTML source, executing the alert. Note this works in Internet Explorer only.
Other browsers will automatically encode the URL rendering
%3cscript%3ealert%28123%29%3c/script%3e
instead which is safe.
I can think of a few different attacks, maybe there are more which then others will hopefully add. :)
If your XSS is just some header value reflected in the response unencoded, I would say that's less of a risk compared to stored XSS. There may be factors to consider, though. For example, if it's a header that the browser adds and that can be set in the browser (like the user agent), an attacker may get access to a client computer, change the user agent, and then let a normal user use the website, now with the attacker's JavaScript injected. Another example that comes to mind is a website that displays the URL that referred you there (the referer); in this case the attacker only has to link to the vulnerable application from his carefully crafted URL. These are edge cases, though.
If it's stored, that's more straightforward. Consider an application that logs user access with all request headers, and let's suppose there is an internal application for admins that they use to inspect logs. If this log viewer application is web based and vulnerable, any javascript from any request header could be run in the admin context. Obviously this is just one example, it doesn't need to be blind of course.
Cache poisoning may also help with exploiting a header XSS.
Another thing I can think of is browser plugins. Flash is less prevalent now (thankfully), but with different versions of Flash you could set different request headers on your requests. What exactly you can and cannot set is a mess and very confusing across Flash plugin versions.
So there are several attacks, and it is necessary to treat all headers as user input and encode them accordingly.
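A minimal sketch of that encoding step in Python, using the referer payload from the earlier answer as input (the rendering function is illustrative):

```python
import html

def render_referer_note(referer: str) -> str:
    # Treat the header like any other user input: HTML-encode it before
    # writing it into the page, so markup in the value becomes inert text.
    return f"<p>You came from: {html.escape(referer)}</p>"

payload = "http://evil.example.com/?<script>alert(123)</script>"
safe = render_referer_note(payload)
# The output contains &lt;script&gt; rather than an executable tag.
```

Note that `html.escape` is only correct for HTML body and (with `quote=True`, the default) quoted-attribute contexts; values placed in URLs, JavaScript, or CSS need context-specific encoding instead.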
Exploiting XSS in the referrer header is almost like a traditional reflected XSS. The additional point to make is that the attacker's website redirects to the victim website, and hence a referrer header containing the required JavaScript is appended to the request for the victim website.
One essential point that needs to be discussed is why this vulnerability can be exploited only with IE and not with other browsers. The traditional answer is that Chrome and Firefox automatically encode URL parameters, and hence XSS is not possible. But the interesting thing is that when we have any number of bypasses for traditional XSS filters, why can't we have bypasses for this scenario too?
Yes, we can, with the following payload, which bypasses HTML validation the same way a traditional payload does:
http://evil.example.com/?alert(1)//cctixn1f
Here the response could be something like this:

The link on the referring page seems to be wrong or outdated.

where "referring page" is rendered as a link whose href is taken from the referrer. If the victim clicks on "referring page", the alert will be generated.
Bottom line: not only in IE; XSS can be possible even in Chrome and Firefox when the referrer is used as part of an href attribute.
Some parts of my website are only accessible via HTTPS (not whole website - security vs performance compromise) and that HTTPS is enforced with a 302 redirect on requests to the secure part if they are sent over plain HTTP.
The problem is that for all major browsers, if you do a 302 redirect on a POST it will automatically be switched to a GET (AFAIK this should only happen on 303, but nobody seems to care). An additional issue is that all POST data is lost.
So what are my options here other than accepting POSTs to secure site over HTTP and redirecting afterwards or changing loads of code to make sure all posts to secure part of website go over HTTPS from the beginning?
You are right, this is the only reliable way: the POST request should go over the HTTPS connection from the very beginning. Moreover, it is recommended that the form that leads to such a POST is also loaded over HTTPS. Usually the first form after which you have an HTTPS connection is a login form. Browsers apply different security restrictions to pages loaded over HTTP versus HTTPS, which lowers the risk of executing a malicious script in a context that owns sensitive data.
I think that's what 307 is for. RFC2616 does say:
If the 307 status code is received in response to a request other
than GET or HEAD, the user agent MUST NOT automatically redirect the
request unless it can be confirmed by the user, since this might
change the conditions under which the request was issued.
but it says the same thing about 302, and we know what happens there.
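As a sketch, a WSGI-style middleware issuing the 307 might look like this. Query strings and other edge cases are omitted, and all names are illustrative:

```python
def force_https(app):
    """WSGI middleware sketch: answer plain-HTTP requests with a 307 so that
    browsers re-send the request -- including a POST body -- to the HTTPS URL.
    (Remember: by then the original POST has already crossed the wire in the
    clear, which is the bigger problem discussed below.)"""
    def wrapper(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "example.com")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("307 Temporary Redirect", [("Location", location)])
            return [b""]
        return app(environ, start_response)
    return wrapper
```

Unlike a 302, a 307 tells the client it MUST NOT change the request method, so compliant browsers replay the POST (after a confirmation prompt in some cases, per the RFC).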
Unfortunately, you have a bigger problem than browsers not dealing with response codes the way the RFC's say, and that has to do with how HTTP works. Simplified, the process looks like this:
The browser sends the request
The browser indicates it has sent the entire request
The server sends the response
Presumably your users are sending some sensitive information in their post and this is why you want them to use encryption. However, if you send a redirect response (step 3) to the user's unencrypted POST (step 1), the user has already sent all of the sensitive information out unencrypted.
It could be that you don't consider the information the user sends that sensitive, and only consider the response that you send to be sensitive. However, this turns out not to make sense. Sensitive information should be available only to certain individuals, and the information used to authenticate the user is necessarily part of the request, which means your response is now available to anyone. So, if the response is sensitive, the request is sensitive as well.
It seems that you are going to want to change lots of code to make sure all secure posts use HTTPS (you probably should have written them that way in the first place). You might also want to reconsider your decision to only host some of your website on HTTPS. Are you sure your infrastructure can't handle using all HTTPS connections? I suspect that it can. If not, it's probably time for an upgrade.
What is the point of doing this?
I want a reason why it's a good idea to send a person back to where they came from if the referrer is outside of the domain. I want to know why a handful of websites out there insist that this is good practice. It's easily exploitable, easily bypassed by anyone who's logging in with malicious intent, and just glares in my face as a useless "security" measure. I don't like to have my biased opinions on things without other input, so explain this one to me.
The request headers are only as trustworthy as your client, why would you use them as a means of validation?
There are three reasons why someone might want to do this. Checking the referer is a method of CSRF Prevention. A site may not want people to link to sensitive content and thus use this to bounce the browser back. It may also be to prevent spiders from accessing content that the publisher wishes to restrict.
I agree it is easy to bypass this referer restriction in your own browser using something like TamperData. It should also be noted that the browser's HTTP request will not contain a referer if you're coming from an https:// page to an http:// page.
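For what it's worth, the CSRF-prevention variant of the check is only a few lines. This Python sketch (hostnames and the allow-missing policy are illustrative assumptions) also shows the common decision point around an absent header:

```python
from urllib.parse import urlparse

ALLOWED_HOST = "example.com"  # the site's own domain (illustrative)

def referer_ok(referer_header):
    # A missing referer is common in practice: it is stripped on
    # HTTPS -> HTTP navigations and by various privacy tools, so many
    # deployments choose to allow it rather than lock those users out.
    if not referer_header:
        return True
    host = urlparse(referer_header).hostname or ""
    # Accept the exact host or any subdomain of it.
    return host == ALLOWED_HOST or host.endswith("." + ALLOWED_HOST)
```

As the answer above notes, this is spoofable by the user's own client, so it is defence in depth against cross-site requests, not an authentication mechanism.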