Implications of Strict Transport Security (HSTS) max-age = 0

When setting up HSTS in Cloudflare, I noticed that the default max-age is set to 0.
To my understanding, this default value effectively disables HSTS, which could be considered a misconfiguration and could also be used to track users.
As I have only found mentions of these issues and no clear explanations, I wanted to ask:
1. Does setting max-age = 0 have the same effect as a constantly expiring max-age?
2. If 1 is true, what are the implications of constantly having a “first visit” HTTP request before switching over to HTTPS?
For 2, I am thinking of constant windows for MITM attacks. But would there be other risks? Implications like tracking are unclear to me, and any explanation or further references would be great.

Based on my understanding of these extra resources about common mistakes, privacy, and general use of the header:
Setting max-age = 0 immediately expires the Strict-Transport-Security policy in the browser, allowing, but not forcing, the traffic to go over HTTP.
This also addresses the second part of my question: allowing HTTP access brings back numerous attack vectors such as protocol downgrade, MITM, and SSL stripping, along with potential privacy issues.
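A quick way to see what a host actually sends is a sketch like the following, assuming Python with the third-party requests library and a placeholder URL:

import requests

# Fetch over HTTPS and inspect the HSTS policy the server advertises.
resp = requests.get("https://example.com/")
hsts = resp.headers.get("Strict-Transport-Security")
if hsts is None:
    print("No HSTS header sent")
elif "max-age=0" in hsts.replace(" ", ""):
    # max-age=0 tells the browser to delete any stored policy for this
    # host, so HSTS is effectively switched off.
    print("HSTS present but disabled:", hsts)
else:
    print("HSTS active:", hsts)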
Note:
I am not marking my own answer as accepted, as I think it helps me better understand the implications of a misconfigured header, but not entirely.

Related

Shall I use the Content-Security-Policy HTTP header for a backend API?

We're implementing HSTS on our backend API and I stumbled upon the Content-Security-Policy (CSP) header. This header tells the browser from where resources such as images, video, stylesheets, scripts, and so on may be loaded.
Since a backend API won't really display things in a browser, what's the value of having this header set?
CSP is a technique designed to impair XSS attacks. It is most useful in combination with serving hypermedia that relies on other resources being loaded with it, which is not exactly a scenario I would expect with an API. That is not to say you cannot use it. If there really is no interactive content in your responses, nothing stops you from serving this header:
Content-Security-Policy: default-src 'none';
Going one step further, you could use CSP as some sort of makeshift Intrusion Detection System by setting report-uri in order to fetch incoming violation reports. That is well within the intended use but still a bit on the cheap.
In conclusion, it can theoretically improve the security of your API through little effort. Practically, the advantages may be slim to none. If you feel like it, there should be no harm in sending that header. You may gain more by e.g. suppressing MIME-type sniffing, though.
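For illustration, a minimal sketch of serving such headers from an API, assuming Flask (the app and header set here are illustrative, not a definitive hardening list):

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(resp):
    # An API response should never load sub-resources or be framed.
    resp.headers["Content-Security-Policy"] = "default-src 'none'; frame-ancestors 'none'"
    # Suppress MIME-type sniffing, as suggested above.
    resp.headers["X-Content-Type-Options"] = "nosniff"
    return resp

@app.get("/status")
def status():
    return {"ok": True}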
See also: The OWASP Secure Headers Project

Compress Entire Response, Except Cookies

How do I tell ASP.Net to compress the Response, but not the Cookies in the Response? So all the HTML output, but not the cookies.
Background: BREACH
The BREACH attack remains unsolved. It works against TLS-secured, gzip-compressed responses that contain a secret.
Any site where you're logged in ought to have HTTPS enabled, and is going to keep sending back in its responses a cookie that is the perfect secret for an attacker to target: if they can get it, they've got your token and can masquerade as you.
There's no satisfactory solution to this, but one strong mitigation is to compress the secrets separately from the rest of the response, or not at all. Another is to include a CSRF token. For pages that display the result of submitting form data, a CSRF token is fine, since we need one anyway and caching isn't so important performance-wise. But static pages need to be cacheable, which makes the weight of a CSRF token too much.
If we could just tell ASP.Net not to compress the cookie, the only secret in those responses, we'd be good to go:
Caching works on the static pages that need it
HTTPS and gzip get to be in play at the same time, with gzip switched off for just that little bit of the response
BREACH is dead
So, is this possible, and if so, how? I'm fine even with something like an HttpModule that does the gzip step, so long as it doesn't produce a corrupt response.
Some kind of patch or module that just separates the gzip compression contexts (the main proposed solution to BREACH) would be even better, but that seems like asking too much.
Note that there seems to be disagreement in the security community as to whether BREACH can be used to get at cookies/session tokens in the first place: some sources say it can, others that it can't.
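For what it's worth, here is a rough sketch of the "compress everything except responses that carry a secret" idea as WSGI middleware rather than an ASP.Net HttpModule (purely illustrative: it skips compression whenever a cookie is set, and it does not implement the separate-compression-contexts mitigation):

import gzip

def gzip_unless_cookie(app):
    # WSGI middleware: gzip the body, but skip compression entirely for
    # responses that set a cookie. Assumes the wrapped app returns its
    # body as an iterable and does not use the write() callable.
    def wrapper(environ, start_response):
        state = {}
        def capture(status, headers, exc_info=None):
            state["status"], state["headers"] = status, list(headers)
            return lambda data: None
        body = b"".join(app(environ, capture))
        headers = state["headers"]
        sets_cookie = any(n.lower() == "set-cookie" for n, _ in headers)
        if not sets_cookie and "gzip" in environ.get("HTTP_ACCEPT_ENCODING", ""):
            body = gzip.compress(body)
            headers = [(n, v) for n, v in headers
                       if n.lower() not in ("content-length", "content-encoding")]
            headers += [("Content-Encoding", "gzip"),
                        ("Content-Length", str(len(body)))]
        start_response(state["status"], headers)
        return [body]
    return wrapper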

Default Expiration Time of Cacheable Resources

I was looking for website optimization tips online, and most of them shared a common tip: specify an expiration for cacheable resources. I have not yet specified any expiration. So what default expiration duration do cacheable resources get? Please help me.
I am sorry, but I am not sure how this particular question relates to Google BigQuery. Maybe there is some information you didn't disclose?
Update: after your comment and the change of category to HTTP, I think I can answer.
If you are not setting any Expires or Cache-Control headers, then a well-behaved browser should issue a new GET request every time (or, at most, fall back to heuristic caching based on Last-Modified). There is no such thing as a default expiration.
Depending on your technology stack, your web server or application server might add default cache headers. To verify that, open the page in Firefox or Chrome with the developer tools and inspect the headers. If you can't find any Expires or Cache-Control headers in the response your page sends, then you don't have any default expiration and you are not making use of caching.
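To check from a script instead, something along these lines works (assuming Python with the third-party requests library and a placeholder URL):

import requests

resp = requests.get("https://example.com/static/style.css")
# If none of these headers are present, there is no caching policy at all.
for name in ("Expires", "Cache-Control", "Last-Modified", "ETag"):
    print(name + ":", resp.headers.get(name, "(not set)"))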

Is domain attribute of a cookie sent back to a server?

If evil.example.com sets a cookie with a domain attribute set to .example.com, a browser will include this cookie in requests to foo.example.com.
The Tangled Web notes that for foo.example.com such a cookie is largely indistinguishable from cookies set by foo.example.com itself. But according to the RFC, the domain attribute of a cookie should be sent to the server, which would make it possible for foo.example.com to distinguish and reject a cookie that was set by evil.example.com.
What is the state of current browsers implementations? Is domain sent back with cookies?
RFC 2109 and RFC 2965 were historical attempts to standardise the handling of cookies. Unfortunately they bore no resemblance to what browsers actually do, and should be completely ignored.
Real-world behaviour was primarily defined by the original Netscape cookie_spec, but this was highly deficient as a specification, which has resulted in a range of browser differences around:
what date formats are accepted;
how cookies with the same name are handled when more than one match;
how non-ASCII characters work (or don't work);
quoting/escapes;
how domain matching is done.
RFC 6265 is an attempt to clean up this mess and definitively codify what browsers should aim to do. It doesn't say browsers should send domain or path, because no browser in history has ever done that.
Because you can't detect that a cookie comes from a parent domain(*), you have to take care with your hostnames to avoid overlapping domains if you want to keep your cookies separate - in particular for IE, where even if you don't set domain, a cookie set on example.com will always inherit into foo.example.com.
So: don't use a 'no-www' hostname for your site if you think you might ever want a subdomain with separate cookies in the future (that shouldn't be able to read sensitive cookies from its parent); and if you really need a completely separate cookie context, to prevent evil.example.com injecting cookie values into other example.com sites, then you have no choice but to use completely separate domain names.
An alternative that might be effective against some attack models would be to sign every cookie value you produce, for example using an HMAC.
*: there is kind of a way. Try deleting the cookie with the same domain and path settings as the cookie you want. If the cookie disappears when you do so, then it must have had those domain and path settings, so the original cookie was OK. If there is still a cookie there, it comes from somewhere else and you can ignore it. This is inconvenient, impractical to do without JavaScript, and not watertight because in principle the attacker could be deleting their injected cookies at the same time.
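A minimal sketch of the HMAC idea above, using only Python's standard library (the key handling and the "value.mac" wire format are illustrative assumptions):

import hmac
import hashlib

SECRET_KEY = b"change-me"  # server-side secret, never sent to the client

def sign(value: str) -> str:
    # Append a MAC so a cookie injected from a sibling domain, which
    # cannot know the key, will fail verification.
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value + "." + mac

def verify(signed: str):
    value, _, mac = signed.rpartition(".")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None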
The current standard for cookies is RFC 6265. This version has simplified the Cookie: header. The browser just sends cookie name=value pairs, not attributes like the domain. See section 4.2 of the RFC for the specification of this header. Also, see section 8.6, Weak Integrity, where it discusses the fact that foo.example.com can set a cookie for .example.com, and bar.example.com won't be able to tell that it's not a cookie it set itself.
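To see that asymmetry concretely, here is a short sketch with Python's http.cookies module (cookie names and values are illustrative): the Domain attribute appears only in the outgoing Set-Cookie header, never in what comes back.

from http import cookies

jar = cookies.SimpleCookie()
jar["session"] = "abc123"
jar["session"]["domain"] = ".example.com"

# What the server emits, attributes included:
print(jar.output())  # e.g. Set-Cookie: session=abc123; Domain=.example.com
# What the browser later sends back is only the name=value pair:
#   Cookie: session=abc123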
I would believe Zalewski's book and his https://code.google.com/p/browsersec/wiki/Main over any RFCs. Browsers' implementations of a number of HTTP features are notoriously messy and non-standard.

Web security -- HTTP-Location = HTTP-Referrer if outside domain? Why?

What is the point of doing this?
I want a reason why it's a good idea to send a person back to where they came from if the referrer is outside of the domain. I want to know why a handful of websites out there insist that this is good practice. It's easily exploitable, easily bypassed by anyone who's logging in with malicious intent, and just glares in my face as a useless "security" measure. I don't like to have my biased opinions on things without other input, so explain this one to me.
The request headers are only as trustworthy as your client; why would you use them as a means of validation?
There are three reasons why someone might want to do this. Checking the referer is a method of CSRF prevention. A site may not want people to link to sensitive content, and thus uses this to bounce the browser back. It may also be to prevent spiders from accessing content that the publisher wishes to restrict.
I agree it is easy to bypass this referer restriction on your own browser using something like TamperData. It should also be noted that the browser's HTTP request will not contain a Referer header if you're coming from an https:// page and going to an http:// page.
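As a concrete illustration of the bounce-back behaviour being discussed, here is a sketch assuming Flask (TRUSTED_HOST and the setup are made up; note it trusts a client-supplied header, which is exactly why it is weak):

from urllib.parse import urlparse
from flask import Flask, redirect, request

app = Flask(__name__)
TRUSTED_HOST = "example.com"  # illustrative

@app.before_request
def bounce_external_referers():
    referer = request.headers.get("Referer", "")
    host = urlparse(referer).hostname
    if host is not None and host != TRUSTED_HOST:
        # Send the visitor back where they came from. Trivially bypassed
        # by omitting or forging the header, and it doubles as an open
        # redirect, which is part of why the asker calls it useless.
        return redirect(referer)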
