Why doesn't google.com set the includeSubDomains directive on its HTTP Strict Transport Security response header?
google.com's HSTS response header is something like:
Strict-Transport-Security:max-age=86400
Why not
Strict-Transport-Security: max-age=86400; includeSubDomains
The second one seems more secure to me, is that right?
It is static
Using Google Chrome, you can go to chrome://net-internals/#hsts and Query different domains. Entering google.com and clicking on Query will bring back a list of results.
In that result list, you can see that static_sts_include_subdomains is true and dynamic_sts_include_subdomains is false. This is better than setting it dynamically, because the dynamic mechanism is vulnerable to an attack whereby the very first time the browser requests the domain over http:// (not https://) an adversary intercepts the communication. To overcome this weakness, the static mode allows HSTS records to be hard-coded directly into the browser's source code.
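For illustration, entries in Chromium's transport_security_state_static.json look roughly like this (a sketch from memory of the file's format, not an exact copy of Google's actual entry):

{ "name": "google.com", "include_subdomains": true, "mode": "force-https" }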
Hope this helps
Yes it is more secure to use includeSubDomains.
For example, an attacker could set up a subdomain (e.g. hacked.google.com), serve it over plain HTTP, and use it to access or override cookies scoped to the top-level domain (google.com), even though that top-level domain is secured with HSTS. Of course, if you're using the Secure attribute on your cookies then this might not be an issue, but it is one example of why to use includeSubDomains.
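For illustration, a page served over plain HTTP on that attacker-controlled subdomain could set a domain-wide cookie that shadows the legitimate one (the cookie name SID is just a hypothetical example):

Set-Cookie: SID=attacker-chosen-value; Domain=google.com; Path=/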
You cannot set the includeSubDomains attribute unless all subdomains are available over HTTPS (obviously). So if Google had blog.google.com and had still not upgraded it to HTTPS, that might explain why they would not use includeSubDomains at the top-level domain.
However, as @Horkine rightly points out, Google preloads its domains into the Chrome browser code (and that preload list is also picked up by other browsers), so this HTTP header isn't used by modern browsers.
Weirdly, there are some inconsistencies between the preloaded version and the HTTP header version. That is very odd, to be honest. Incidentally, these discrepancies also break their own rules for preloading.
www.google.com
The preloaded version for www.google.com does have the includeSubDomains attribute.
The Strict-Transport-Security HTTP header version has neither the includeSubDomains attribute nor the preload attribute.
google.com
The preloaded version for google.com does have the includeSubDomains attribute.
No Strict-Transport-Security HTTP header is published.
Why these inconsistencies? We can only guess:
It could be that they never got round to updating their HTTP Header after finishing the upgrade to all their sites?
Or it could be because some of the apps do browser detection for older browsers (which do not include the preload code, but do understand the HSTS header) and redirect older browsers to http://old.google.com for some reason?
Or it could be region specific?
All of it is a guess really, as only Google can answer and I'm not aware of any documentation of what they use on their own site or why.
But, yes, to answer your last question: it is more secure to include includeSubDomains (if possible), and it is even more secure to preload (though not without risk unless you are 100% confident you are only using HTTPS).
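So, if all your subdomains are on HTTPS and you are confident enough to preload, the full header would look something like this (the max-age of one year shown here is a common choice and the minimum the preload list requires):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload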
Related
I have this website set up:
http://website1.com/ - returns 301 Moved Permanently and redirects to http://www.website1.com/.
http://www.website1.com/ - returns 301 Moved Permanently and redirects to https://www.website2.com/.
https://www.website2.com/ - returns 200 OK and has this in the response:
strict-transport-security: max-age=31536000; includeSubDomains
I have this subdomain running a web app:
https://subdomain.website1.com/
This also has the following header in the response:
Strict-Transport-Security: max-age=31536000; includeSubDomains
I want to have preload functionality for all subdomains of website1.com.
However, I get the following errors when checking eligibility:
Error: No HSTS header
Response error: No HSTS header is present on the response.
Error: HTTP redirects to www first
http://website1.com (HTTP) should immediately redirect to https://website1.com (HTTPS) before adding the www subdomain.
Right now, the first redirect is to http://www.website1.com/.
The extra redirect is required to ensure that any browser which supports HSTS will record the HSTS entry for the top level domain, not just the subdomain.
The first error is easy, I can just add the HSTS header.
But why does it matter that there's a redirect?
All I want is for http://subdomain.website1.com/ to make an internal redirect to https://subdomain.website1.com/, and for http://website1.com/ to internally redirect to https://website1.com/.
Can't http://website1.com make an internal redirect to https://website1.com, regardless of the fact that it redirects to www.website1.com/?
I have this website set up: http://website1.com/ - returns 301 Moved Permanently and redirects to http://www.website1.com/.
This is your issue. http://website1.com should redirect to https://website1.com then on to https://www.website1.com.
This way the top-level website1.com domain will pick up the HSTS header and protect itself and all subdomains (assuming it has the includeSubDomains attribute set, which is a prerequisite for preloading).
Without switching to HTTPS first, or if you skip straight to https://www.website1.com, the browser will never see the HSTS header on the top-level domain and so will never learn that it (and all subdomains) should be protected by HSTS. This is 1) less secure and 2) more risky when preloading, as maybe you still have a non-HTTPS site (e.g. http://blog.website1.com or http://intranet.website1.com). Forcing you to set this up before you preload will hopefully surface those issues while it's still possible to reverse HSTS (which is basically impossible after it's preloaded into the browsers' source code, at least for many months).
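As a rough sketch of that redirect chain in Apache (assuming Apache since you mention it below; the ServerName values are your placeholders, and certificate setup is omitted):

<VirtualHost *:80>
    ServerName website1.com
    # Plain HTTP on the bare domain: redirect straight to HTTPS on the same host
    Redirect permanent / https://website1.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName website1.com
    # SSLEngine and certificate directives omitted for brevity
    # Send HSTS with "always" so it is included on the 301 too, then hop to www
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    Redirect permanent / https://www.website1.com/
</VirtualHost>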
And the risk of accidentally locking out a non-HTTPS subdomain with preload is one reason I’ve argued in the past that preload is potentially more risky than useful, and overkill for most sites. But with HTTPS becoming the norm, I’m less against it now. Still think it’s a bit overkill except for high target sites though.
Btw, for the first error, make sure the HSTS header is included on 301 redirects. For Apache, for example, you need Header always set rather than just Header set, as explained here: https://stackoverflow.com/a/48103216/2144578
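In Apache terms the difference is:

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

rather than:

Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"

since the latter is only added to successful (2xx) responses and not to the 301 redirects.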
I am running WordPress 5.3.2 on Apache/2.4.29 (Ubuntu 18.04) on a Digital Ocean droplet.
My client requested the following:
All cookies transferred over an encrypted session, in particular session cookies, should be marked as 'Secure' and all session information should be transmitted over HTTPS.
The HttpOnly flag should also be set within the cookie
So, I defined the following in the virtual host:
Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure;SameSite=Strict
I then tested the header response and could see my Set-Cookie defined.
The problem is, I now can't login to WordPress. WordPress says:
ERROR: cookies are blocked or not supported by your browser. You must
enable cookies to use WordPress.
What am I doing wrong?
Strict is probably more restrictive than you want, as this will prevent cookies from being sent on initial cross-site navigations, e.g. if I emailed you a link to a page on your blog, when you first followed that link, the SameSite=Strict cookies would not be sent and it might appear as if you were not logged in.
SameSite=Lax is a better default here. Then I would explicitly look at setting SameSite=Strict or SameSite=None on individual cookies where you know the level of access required.
The HttpOnly attribute is also blanket-preventing all of your server-side set cookies from being read by JavaScript. You may well have functionality on your page that requires JavaScript access to some of those cookies.
Finally, a blanket approach here is probably overkill - as it looks as if you will be appending this snippet to every outgoing cookie header, even the ones that already include those attributes. This is likely to cause some unpredictable behaviour. I would either do this on a specific allow-list basis by checking for explicit cookie names or I would alter the regex to only set this if those attributes are missing.
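For example, the second approach might look something like this (a sketch only; I have not tested these exact patterns, and they assume Apache 2.4's PCRE engine, which supports the negative lookahead used to skip cookies that already carry the attribute):

Header edit Set-Cookie "^((?:(?!;\s*HttpOnly).)*)$" "$1; HttpOnly"
Header edit Set-Cookie "^((?:(?!;\s*Secure).)*)$" "$1; Secure"
Header edit Set-Cookie "^((?:(?!;\s*SameSite).)*)$" "$1; SameSite=Lax"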
A late answer. But if it helps someone:
Put these values in php.ini
session.cookie_httponly = 1
session.cookie_secure = 1
Of course you should have a valid https certificate.
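If your PHP is 7.3 or newer, you can also set the SameSite attribute discussed above the same way:

session.cookie_samesite = Lax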
I have a company website that's hosted as https://foo.bar.com.
However, it was incorrectly conveyed to a lot of users that the URL would be www.foo.bar.com. Until this can be rectified, we are putting through an interim solution by setting up a proxy site www.foo.bar.com that will redirect any users coming to it to https://foo.bar.com.
This works... but only the first time the user navigates to the page. The next time I try to access www.foo.bar.com, due to caching, the browser takes me to https://www.foo.bar.com. We don't have a certificate set up for https://www.foo.bar.com and as a result are given a NET::ERR_CERT_COMMON_NAME_INVALID error.
Is there a way to work around this without needing a certificate?
To test, I've even tried returning a webpage when I navigate to www.foo.bar.com, with a link that navigates to https://foo.bar.com. However, the same issue happens even in this case. I'm guessing HSTS is at play here, but I'm not sure how to get around it.
I'd appreciate any insight into this matter, thank you in advance.
I believe the only solution to your problem is to obtain a valid certificate for www.foo.bar.com. Due to the certificate error, the browser will not attempt to communicate with your server, so there's no way for you to issue a redirect away from the wrong domain to the correct one.
Why only the second time?
You mention HSTS, so I am assuming https://foo.bar.com is sending a Strict-Transport-Security header as part of its response. This header is likely being sent with the includeSubDomains option, which instructs the browser to enforce HTTPS not only on foo.bar.com but also on all subdomains of that main domain. As a result, when trying to request www.foo.bar.com, the browser matches that HSTS rule and automatically rewrites the request to use HTTPS.
Once this HSTS rule has been set in the browser it cannot be removed except by expiring: either by exceeding the original max-age time, or by issuing another Strict-Transport-Security header with max-age=0 on https://foo.bar.com.
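That is, to actively clear the entry you would serve this over HTTPS on foo.bar.com until cached copies have expired:

Strict-Transport-Security: max-age=0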
Kind of a 101 question about X-Frame-Options and/or Content-Security-Policy: frame-ancestors: if one intends to develop, on a local machine, an application that iframes production sites (on which I can adjust headers), would they have to add localhost to frame-ancestors in the Content-Security-Policy? Will X-Frame-Options SAMEORIGIN not work at all?
You would want to strip those headers from the framed response so they don't prevent rendering.
Locally, the only thing that applies would be the frame-src directive in the localhost response, allowing you to embed your production sites (not setting a CSP at all would also work).
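If instead you wanted the production response itself to allow the local frame, the production site would need to list the local origin explicitly, something like this (the port is just a placeholder for wherever your dev server runs):

Content-Security-Policy: frame-ancestors 'self' http://localhost:3000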
I am adding the HTTP Strict Transport Security header to a website.
Does it prevent loading of resources over HTTP that are not in same domain?
HSTS only applies to the domain it's sent with, and any subdomains if the includeSubDomains attribute is also set.
Any other domains are unaffected.
However, one thing to be careful of is if your main domain (www.example.com) uses the same config as the bare domain (example.com), which is quite common, and you issue the HSTS header on both (perhaps without realising it's also on the bare domain) with the includeSubDomains attribute. In that case you can easily block access to other subdomains you did not intend to, which are still on HTTP (e.g. http://blog.example.com or http://internal.example.com), if someone visits the bare domain.
BTW, if you want to upgrade all the HTTP requests to HTTPS, you could use Content Security Policy (CSP), which has an upgrade-insecure-requests directive. However, browser support for that is not yet universal. You can also use CSP to help you identify mixed content, as discussed here.
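For reference, that CSP directive is simply:

Content-Security-Policy: upgrade-insecure-requests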