Lax vs Strict for Set-Cookie HTTP header and CSRF

I was just reading https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie:
Lax: The cookie is not sent on cross-site requests, such as calls to
load images or frames, but is sent when a user is navigating to the
origin site from an external site (e.g. if following a link). This is
the default behavior if the SameSite attribute is not specified.
If this is the default, then doesn't this mean CSRF attacks can't happen?
If someone loads a malicious website that runs JavaScript in the background to make a simple POST request to a website the victim is currently logged into, then the default behaviour is that the cookie won't be sent, right?
Also, why would someone choose to use Strict over Lax?
Why would you ever want to prevent a user's browser from sending a cookie to the origin website when navigating to that website, which is what Strict does?

CSRF attacks are still possible when SameSite is Lax. It prevents the cross-site POST attack you mentioned, but if a website triggers an unsafe operation with a GET request then it would still be possible. For example, many sites currently trigger a logout with a GET request, so it would be trivial for an attacker to log a user out of their session.
The standard addresses this directly:
Lax enforcement provides reasonable defense in depth against CSRF
attacks that rely on unsafe HTTP methods (like "POST"), but does not
offer a robust defense against CSRF as a general category of attack:
Attackers can still pop up new windows or trigger top-level
navigations in order to create a "same-site" request (as
described in section 5.2.1), which is only a speedbump along the
road to exploitation.
Features like <link rel='prerender'> can be
exploited to create "same-site" requests without the risk of user
detection.
Given that, the reason why someone would use Strict is straightforward: it prevents a broader class of CSRF attacks. There's a tradeoff, of course, since it prevents some ways of using your site, but if those use cases aren't important to you then the tradeoff might be justified.
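To make the two policies concrete, here is a minimal sketch using Python's standard http.cookies module (which accepts the samesite attribute from Python 3.8 onward). The cookie name session and its value are made up for the example:

```python
from http.cookies import SimpleCookie


def session_cookie(value: str, samesite: str = "Lax") -> str:
    """Build a Set-Cookie header value. SameSite=Lax withholds the cookie
    on cross-site subresource and POST requests but still sends it on
    top-level link navigation; Strict withholds it even then."""
    c = SimpleCookie()
    c["session"] = value
    c["session"]["httponly"] = True
    c["session"]["samesite"] = samesite
    return c["session"].OutputString()
```

For example, session_cookie("abc", "Strict") produces a header with SameSite=Strict, so even a plain link from another site arrives without the cookie.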


How to exploit HTTP header XSS vulnerability?

Let's say that a page is just printing the value of the HTTP 'referer' header with no escaping. So the page is vulnerable to an XSS attack, i.e. an attacker can craft a GET request with a referer header containing something like <script>alert('xss');</script>.
But how can you actually use this to attack a target? How can the attacker make the target issue that specific request with that specific header?
This sounds like a standard reflected XSS attack.
In reflected XSS attacks, the attacker needs the victim to visit a site that is in some way under the attacker's control, even if it is just a forum where the attacker can post a link in the hope that somebody will follow it.
In the case of a reflected XSS attack via the referer header, the attacker could redirect the user from the forum to a page on the attacker's domain.
e.g.
http://evil.example.com/?<script>alert(123)</script>
This page in turn redirects to the following target page in a way that preserves referer.
http://victim.example.org/vulnerable_xss_page.php
Because the page outputs the referer header without proper escaping, http://evil.example.com/?<script>alert(123)</script> ends up in the HTML source, executing the alert. Note this works in Internet Explorer only.
Other browsers automatically URL-encode the URL, rendering
%3cscript%3ealert%28123%29%3c/script%3e
instead, which is safe.
I can think of a few different attacks; maybe there are more, which others will hopefully add. :)
If your XSS is just some header value reflected in the response unencoded, I would say that's less of a risk than stored XSS. There may be factors to consider, though. For example, if it's a header that the browser adds and that can be set in the browser (like the user agent), an attacker may get access to a client computer, change the user agent, and then let a normal user use the website, now with the attacker's JavaScript injected. Another example that comes to mind: a website may display the URL that redirected you there (the referer), in which case the attacker only has to link to the vulnerable application from a carefully crafted URL. These are edge cases, though.
If it's stored, that's more straightforward. Consider an application that logs user access with all request headers, and let's suppose there is an internal application for admins that they use to inspect logs. If this log viewer application is web based and vulnerable, any javascript from any request header could be run in the admin context. Obviously this is just one example, it doesn't need to be blind of course.
Cache poisoning may also help with exploiting a header XSS.
Another thing I can think of is browser plugins. Flash is less prevalent now (thankfully), but with different versions of Flash you could set different request headers on your requests. What exactly you can and cannot set is a mess and very confusing across Flash plugin versions.
So there are several attacks, and it is necessary to treat all headers as user input and encode them accordingly.
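A minimal sketch of that fix, assuming a Python backend (the function name and the page markup are illustrative, not from any particular framework): HTML-encode the header value before it reaches the page.

```python
from html import escape


def render_referer_page(headers: dict) -> str:
    """Reflect the Referer header, HTML-encoding it first so any
    injected markup is rendered as inert text instead of executing."""
    referer = headers.get("Referer", "")
    # quote=True also escapes quotes, so the value is safe inside attributes.
    return f"<p>You came from: {escape(referer, quote=True)}</p>"
```

With this in place, a referer of http://evil.example.com/?<script>alert(1)</script> comes out as &lt;script&gt;... in the HTML source and never executes.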
Exploiting XSS in the referrer header is much like a traditional reflected XSS. The one additional point is that the attacker's website redirects to the victim website, so a referrer header containing the required JavaScript is attached to the request to the victim site.
One essential point worth discussing is why this is said to be exploitable only in IE and not in other browsers.
The traditional answer is that Chrome and Firefox automatically encode URL parameters, so XSS is not possible. But given how many bypasses exist for traditional XSS filters, why shouldn't there be bypasses for this scenario too?
There are. We can bypass the encoding with the following payload, in the same way we bypass HTML validation with a traditional payload:
http://evil.example.com/?alert(1)//cctixn1f
Here the response could be something like this:
The link on the referring page seems to be wrong or outdated.
(where "referring page" is rendered as a link built from the referrer)
If the victim clicks on "referring page", the alert is generated.
Bottom line: XSS is possible not only in IE but also in other browsers, when the referrer is used as part of an href attribute.

Compress Entire Response, Except Cookies

How do I tell ASP.Net to compress the Response, but not the Cookies in the Response? So all the HTML output, but not the cookies.
Background: BREACH
The BREACH attack remains unsolved. It works against TLS-secured, gzip-compressed responses that contain a secret.
Any site where you're logged in ought to have HTTPS enabled, and will keep sending back in its responses a cookie holding the perfect secret for an attacker to target: if they can get it, they have your token and can masquerade as you.
There's no fully satisfactory solution, but one strong mitigation is to compress the secrets separately from the rest of the response, or not at all. Another is to include a CSRF token. For pages that display the result of submitting form data, a CSRF token is fine, since we need one anyway and caching isn't so important performance-wise. But static pages need to be cacheable, which makes the weight of a CSRF token too much.
If we could just tell ASP.Net not to compress the cookie, the only secret in those responses, we'd be good to go:
Caching works on the static pages that need it
HTTPS and gzip get to be in play at the same time, with gzip switched off for just that little bit of the response
BREACH is dead
So, is this possible, and if so, how? I'm fine even with something like an HttpModule that does the gzip step, so long as it doesn't produce a corrupt response.
Some kind of patch or module that just separates the gzip compression contexts (the main proposed solution to BREACH) would be even better, but that seems like asking too much.
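Separate compression contexts aside, the masking mitigation proposed against BREACH can be sketched in a few lines: XOR the secret with a fresh random pad on every response, so the compressible byte pattern changes each time even though the underlying secret does not. This is a generic sketch, not ASP.Net-specific:

```python
import os


def mask_token(token: bytes) -> bytes:
    """Combine a secret with a fresh one-time pad so its on-the-wire
    bytes differ in every response, defeating BREACH's byte-by-byte
    compression-ratio probing. Transmits pad || (pad XOR token)."""
    pad = os.urandom(len(token))
    masked = bytes(p ^ t for p, t in zip(pad, token))
    return pad + masked


def unmask_token(blob: bytes) -> bytes:
    """Recover the original secret from pad || (pad XOR token)."""
    half = len(blob) // 2
    pad, masked = blob[:half], blob[half:]
    return bytes(p ^ m for p, m in zip(pad, masked))
```

The masked blob is twice the size of the secret, but it round-trips exactly and two maskings of the same token share no byte pattern for compression to latch onto.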
Note that there seems to be conflict in the security community as to whether BREACH can be used to get at cookies/session tokens in the first place:
It can. It can't.

Should mutations always use POST?

Philosophically, I am accustomed to always using GET for HTTP requests that do not alter state, and POST for requests that do. However, lately I have run into some difficulties with this that have caused me to make exceptions. I was curious if there is any non-philosophical downside to using the wrong HTTP verbs, such as security concerns like cross-site attacks.
Exception #1
I wanted to trigger a download of a requested list of files dynamically packaged into an archive. However, the list of files could grow so large that, when encoded as querystring parameters in the URL, they exceeded the url length limit in Internet Explorer. To work around this, I ended up triggering the download with a POST.
Exception #2
There is a button that is always displayed, regardless of whether you are logged in or not, but it can only alter state if you are logged in. If you press it when you are not logged in, you are taken to the login page with a querystring parameter indicating the place you were intending to go next. When you log in, it redirects you there to complete your action. However, the redirect can only generate a GET, not a POST. So we have allowed GETs to alter state in this situation.
Are there any exploits or downsides to these exceptions? Do these allow any cross-site request forgery scenarios that cannot be prevented by checking the referer header?
Answer to question in subject: Yes
Exception #1: A GET request can have a body. You don't have to put everything in the URL
Exception #2: Alter the form to use GET when not logged in and POST if logged in.
Using the referer is not recommended; there are all sorts of workarounds, and some corporate software strips it for privacy reasons.
I highly recommend a token based approach to CSRF-mitigation.
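A minimal sketch of the token-based approach, assuming a server-side session store (the plain dict stands in for whatever session object your framework provides):

```python
import hmac
import secrets


def issue_csrf_token(session: dict) -> str:
    """Generate a random token and remember it server-side; embed the
    returned value in the form as a hidden field."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token


def check_csrf_token(session: dict, submitted: str) -> bool:
    """Compare the submitted token against the session's copy in
    constant time; reject if the session has no token at all."""
    expected = session.get("csrf_token", "")
    return expected != "" and hmac.compare_digest(expected, submitted)
```

A cross-site attacker can make the victim's browser send the cookie, but cannot read the token out of the form, so the forged request fails the check.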

Is domain attribute of a cookie sent back to a server?

If evil.example.com sets a cookie with a domain attribute set to .example.com, a browser will include this cookie in requests to foo.example.com.
The Tangled Web notes that for foo.example.com such a cookie is largely indistinguishable from cookies set by foo.example.com. But according to the RFC, the domain attribute of a cookie should be sent to the server, which would make it possible for foo.example.com to distinguish and reject a cookie that was set by evil.example.com.
What is the state of current browsers implementations? Is domain sent back with cookies?
RFC 2109 and RFC 2965 were historical attempts to standardise the handling of cookies. Unfortunately they bore no resemblance to what browsers actually do, and should be completely ignored.
Real-world behaviour was primarily defined by the original Netscape cookie_spec, but this was highly deficient as a specification, which has resulted in a range of browser differences around:
what date formats are accepted;
how cookies with the same name are handled when more than one match;
how non-ASCII characters work (or don't work);
quoting/escapes;
how domain matching is done.
RFC 6265 is an attempt to clean up this mess and definitively codify what browsers should aim to do. It doesn't say browsers should send domain or path, because no browser in history has ever done that.
Because you can't detect that a cookie comes from a parent domain(*), you have to take care with your hostnames to avoid overlapping domains if you want to keep your cookies separate - in particular for IE, where even if you don't set domain, a cookie set on example.com will always inherit into foo.example.com.
So: don't use a 'no-www' hostname for your site if you think you might ever want a subdomain with separate cookies in the future (that shouldn't be able to read sensitive cookies from its parent); and if you really need a completely separate cookie context, to prevent evil.example.com injecting cookie values into other example.com sites, then you have no choice but to use completely separate domain names.
An alternative that might be effective against some attack models would be to sign every cookie value you produce, for example using an HMAC.
*: there is kind of a way. Try deleting the cookie with the same domain and path settings as the cookie you want. If the cookie disappears when you do so, then it must have had those domain and path settings, so the original cookie was OK. If there is still a cookie there, it comes from somewhere else and you can ignore it. This is inconvenient, impractical to do without JavaScript, and not watertight because in principle the attacker could be deleting their injected cookies at the same time.
The current standard for cookies is RFC 6265. This version has simplified the Cookie: header. The browser just sends cookie name=value pairs, not attributes like the domain. See section 4.2 of the RFC for the specification of this header. Also, see section 8.6, Weak Integrity, where it discusses the fact that foo.example.com can set a cookie for .example.com, and bar.example.com won't be able to tell that it's not a cookie it set itself.
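This is easy to see with Python's standard cookie parser: parsing a Cookie request header the way a server-side framework would yields only name/value pairs, with every attribute slot left empty.

```python
from http.cookies import SimpleCookie

# Parse an incoming Cookie request header as a server would see it.
received = SimpleCookie("session=abc123; theme=dark")

# The name=value pairs arrive...
assert received["session"].value == "abc123"
assert received["theme"].value == "dark"

# ...but attributes like Domain and Path are empty: the browser keeps
# that metadata to itself and never transmits it back to the server.
assert received["session"]["domain"] == ""
assert received["session"]["path"] == ""
```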
I would believe Zalewski's book and his https://code.google.com/p/browsersec/wiki/Main over any RFCs. Browsers' implementations of a number of HTTP features are notoriously messy and non-standard.

How to work around POST being changed to GET on 302 redirect?

Some parts of my website are only accessible via HTTPS (not whole website - security vs performance compromise) and that HTTPS is enforced with a 302 redirect on requests to the secure part if they are sent over plain HTTP.
The problem is that in all major browsers, a 302 redirect on a POST automatically switches it to a GET (afaik this should only happen on 303, but nobody seems to care). An additional issue is that all POST data is lost.
So what are my options here other than accepting POSTs to secure site over HTTP and redirecting afterwards or changing loads of code to make sure all posts to secure part of website go over HTTPS from the beginning?
You are right, this is the only reliable way. The POST request should go over an HTTPS connection from the very beginning. Moreover, it is recommended that the form that leads to such a POST is also loaded over HTTPS; usually the first form after which you have an HTTPS connection is a login form. Browsers apply different security restrictions to pages loaded over HTTP versus HTTPS, so this lowers the risk of some malicious script executing in a context that owns sensitive data.
I think that's what 307 is for. RFC2616 does say:
If the 307 status code is received in response to a request other
than GET or HEAD, the user agent MUST NOT automatically redirect the
request unless it can be confirmed by the user, since this might
change the conditions under which the request was issued.
but it says the same thing about 302 and we know what happens there.
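The difference can be sketched with a tiny helper that picks the redirect status for an HTTP-to-HTTPS upgrade (a hypothetical function, not tied to any framework): 307 asks the browser to replay the same method and body, whereas browsers rewrite a redirected POST to a GET on 301/302.

```python
def https_upgrade_redirect(method: str, host: str, path: str) -> tuple[int, dict]:
    """Build a redirect to the HTTPS version of the same URL.
    Use 307 for non-safe methods so the browser re-sends the POST
    (body included) instead of downgrading it to a bodyless GET."""
    status = 307 if method not in ("GET", "HEAD") else 301
    return status, {"Location": f"https://{host}{path}"}
```

Bear in mind the caveat below still applies: by the time the server can emit this redirect, the original POST body has already crossed the wire unencrypted.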
Unfortunately, you have a bigger problem than browsers not dealing with response codes the way the RFC's say, and that has to do with how HTTP works. Simplified, the process looks like this:
The browser sends the request
The browser indicates it has sent the entire request
The server sends the response
Presumably your users are sending some sensitive information in their post and this is why you want them to use encryption. However, if you send a redirect response (step 3) to the user's unencrypted POST (step 1), the user has already sent all of the sensitive information out unencrypted.
It could be that you don't consider the information the user sends that sensitive, and only consider the response that you send to be sensitive. However, this turns out not to make sense. Sensitive information should be available only to certain individuals, and the information used to authenticate the user is necessarily part of the request, which means your response is now available to anyone. So, if the response is sensitive, the request is sensitive as well.
It seems that you are going to want to change lots of code to make sure all secure posts use HTTPS (you probably should have written them that way in the first place). You might also want to reconsider your decision to only host some of your website on HTTPS. Are you sure your infrastructure can't handle using all HTTPS connections? I suspect that it can. If not, it's probably time for an upgrade.
