While reading through https://hstspreload.org I noticed in the "Deployment Recommendations" section that I should "Add the Strict-Transport-Security header to all HTTPS responses...".
Because including the HSTS policy in all HTTPS responses sounds like overkill to me, I examined a few websites to check whether they really include this header field in all of their HTTPS responses. But not even Google is doing it; e.g. https://www.google.com/doodles has no Strict-Transport-Security header field in the response.
So my question is: when should a server response include the HSTS policy?
The options I see here are:
Include HSTS in every HTTPS response.
Include HSTS in every security-relevant HTTPS response.
Include HSTS only for e.g. example.com, but not for any paths like example.com/mypath.
I mean, sooner or later they're going to visit example.com anyway, no?
Include HSTS only if the request has an "upgrade-insecure-requests: 1" header field.
I noticed that Chrome sends this request header field for security-relevant requests if HSTS was not set.
I don't think it's overkill to add it to every resource. It's a very small header and ensures the best chance of the HSTS policy being seen.
Many people even load a pixel from the base domain (e.g. www.example.com can load https://example.com/1pixel.png) to ensure the base domain's HSTS policy is loaded as well. If you configure HSTS to be delivered only on documents, then this is not picked up.
I certainly would not include it only on the home page. It is not a valid assumption that visitors will sooner or later visit it.
What's your concern here? Do you have a super-optimised site that will be killed by serving this header with each resource? For CSP I'd understand where you were coming from, as that header can get very large, but for HSTS I really think you're overthinking this. Also, if you're using HTTP/2, then header compression solves this too. Plus the config needed to return it only on some resources would be added complexity and hassle you don't really need.
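If you do send it on every response, the easiest way is usually one hook at the framework or server level rather than per-route configuration. A minimal sketch, assuming a Flask app (the max-age and includeSubDomains values here are only illustrative):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_hsts(response):
        # Attach the HSTS policy to every response the app sends over HTTPS.
        # Example values only - tune max-age / includeSubDomains for your own site.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response

    @app.route("/")
    def index():
        return "hello"

The same idea applies in any web server or framework: set the header once, globally, and every resource (documents, images, pixels) carries the policy.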
I have a company website that's hosted as https://foo.bar.com.
However, it was incorrectly conveyed to a lot of users that the URL would be www.foo.bar.com. Until this can be rectified, we are putting through an interim solution by setting up a proxy site www.foo.bar.com that will redirect any users coming to it to https://foo.bar.com.
This works... but only the first time the user navigates to the page. The next time I try to access www.foo.bar.com, due to caching, the browser takes me to https://www.foo.bar.com. We don't have a certificate set up for https://www.foo.bar.com and as a result are given a NET::ERR_CERT_COMMON_NAME_INVALID error.
Is there a way to work around this without needing a certificate?
To test, I've even tried returning a web page when I navigate to www.foo.bar.com, with a link that navigates to https://foo.bar.com. However, the same issue happens even in this case. I'm guessing HSTS is at play here, but I'm not sure how to go about it.
I'd appreciate any insight into this matter, thank you in advance.
I believe the only solution to your problem is to obtain a valid certificate for www.foo.bar.com. Due to the certificate error, the browser will not attempt to communicate with your server, so there's no way for you to issue a redirect away from the wrong domain to the correct domain.
Why only the second time?
You mention HSTS, so I am assuming https://foo.bar.com is sending a Strict-Transport-Security header as part of its response. This header is likely being sent with the includeSubDomains option, which instructs the browser to enforce HTTPS not only on foo.bar.com but also on all subdomains of that domain. As a result, when trying to request www.foo.bar.com, the browser matches that HSTS rule and automatically rewrites it to use HTTPS.
Once this HSTS rule has been set in the browser, it cannot be removed except by expiring: either by exceeding the original max-age time, or by issuing another Strict-Transport-Security header with max-age=0 on https://foo.bar.com.
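For illustration, expiring the cached policy early just means serving the header with a zero lifetime from the site that originally set it (https://foo.bar.com, which does have a valid certificate). A hedged sketch, assuming a hypothetical Flask app rather than your actual stack:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def expire_hsts(response):
        # Serving max-age=0 over HTTPS tells browsers that already cached the old
        # policy (including includeSubDomains) to drop it on their next visit.
        response.headers["Strict-Transport-Security"] = "max-age=0"
        return response

Note that this only helps visitors who revisit https://foo.bar.com and pick up the new policy; a browser that still has the old rule cached will keep rewriting www.foo.bar.com to HTTPS and hit the certificate error.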
Why does google.com not set the includeSubDomains directive on its HTTP Strict Transport Security response header?
google.com's HSTS response header is something like:
Strict-Transport-Security:max-age=86400
Why not
Strict-Transport-Security: max-age=86400; includeSubDomains
The second one seems more secure to me; is that right?
It is static
Using Google Chrome, you can go to chrome://net-internals/#hsts and Query different domains. Entering google.com and clicking on Query will bring back a list of results.
In that result list, you can see that static_sts_include_subdomains is true and dynamic_sts_include_subdomains is false. This is better than setting it dynamically, which is vulnerable to an attack whereby the very first time the browser requests the domain with http:// (not https://) an adversary intercepts the communication. In order to overcome this weakness we have the static mode which allows for hard-coding HSTS records directly into the browser's source.
Hope this helps
Yes it is more secure to use includeSubDomains.
For example, an attacker could set up and use a subdomain (e.g. hacked.google.com), access that over HTTP, and use it to access or override cookies set at the top-level domain (google.com), even though that top-level domain is secured with HSTS. Of course, if you're using the Secure attribute on your cookies then this might not be an issue, but this is just one example of why to use includeSubDomains.
You cannot set the includeSubDomains attribute unless all subdomains are available over HTTPS (obviously). So if Google had blog.google.com and had still not upgraded it to HTTPS, then that might explain why they would not use includeSubDomains at the top-level domain.
However, as @Horkine rightly points out, Google preloads their domains into the Chrome browser code (and that preload list is also picked up by other browsers), so this HTTP header isn't used by modern browsers.
Weirdly, there are some inconsistencies between the preloaded version and the HTTP header version. That is very odd, to be honest. Incidentally, these discrepancies also break their own rules for preloading.
www.google.com
The preloaded version for www.google.com does have the includeSubDomains attribute.
The Strict-Transport-Security HTTP header version has neither the includeSubDomains attribute nor the preload attribute.
google.com
The preloaded version for google.com does have the includeSubDomains attribute.
No Strict-Transport-Security HTTP header is published.
Why these inconsistencies? We can only guess:
It could be that they never got round to updating their HTTP Header after finishing the upgrade to all their sites?
Or it could be because some of their apps do browser detection for older browsers (which do not include the preload code, but do understand the HSTS header) and redirect older browsers to http://old.google.com for some reason?
Or it could be region specific?
All of it is a guess really, as only Google can answer and I'm not aware of any documentation of what they use on their own site or why.
But, yes, to answer your last question: it is more secure to include includeSubDomains (if possible), and it is even more secure to preload (though not without risks unless you are 100% confident you are only using HTTPS).
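If you want to check the dynamic (header-based) policy yourself, a quick script is enough; as noted above, results can vary by region and over time, so treat the output as a snapshot. A small sketch using Python's requests library:

    import requests

    # Print whatever Strict-Transport-Security header each host currently sends.
    # allow_redirects=False so we see the header on the first response, not a redirect target.
    for url in ("https://google.com/", "https://www.google.com/"):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(url, "->", resp.headers.get("Strict-Transport-Security", "(no HSTS header)"))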
I had trouble getting AWS CloudFront to work with Squarespace: issues with forms not submitting and the site saying the website had expired. What settings are needed to get CloudFront working with a Squarespace site?
This is definitely doable, considering I just set this up. Let me share the settings I used on CloudFront, Squarespace, and Route 53 to make it work. If you want to use a different DNS provider than AWS Route 53, you should be able to adapt these settings. Keep in mind that this is not an e-commerce site, but a standard site with a blog, static pages, and forms. You can likely adapt these instructions for other issues as/if they come up.
Cloudfront (CDN)
To make this work, you need to create a Cloudfront Distribution for Web.
Origin Settings
Origin Domain Name should be set to ext-cust.squarespace.com. This is Squarespace's entry point for external domain names.
Origin Path can be left blank.
Origin ID is just the unique ID for this distribution and should auto-populate if you're on the distribution creation screen, or be fixed if you're editing Origin Settings later.
Origin Custom Headers do not need to be set.
Default Cache Behavior Settings / Behaviors
Path Patterns should be left at Default.
I have Viewer Protocol Policy set to Redirect HTTP to HTTPS. This dictates whether your site can use one or both of HTTP or HTTPS. I prefer to have all traffic routed securely, so I redirect all HTTP traffic to HTTPS. Note that you cannot do the reverse and redirect HTTPS to HTTP, as this will cause authentication issues (your browser doesn't want to expose what you thought was a secure connection).
Allowed HTTP Methods needs to be GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. This is because forms (and other things such as comments, probably) use the POST HTTP method to work.
Cached HTTP Methods I left to just GET, HEAD. No need for anything else here.
Forward Headers needs to be set to All or Whitelist. Squarespace's entry point mentioned earlier needs to know what domain you're coming from in order to serve your site, so the Host header must be whitelisted, or allowed along with everything else if set to All.
Object Caching, Minimum TTL, Maximum TTL, and Default TTL can all be left at their defaults.
Forward Cookies is the missing component to get forms working. You can set this to either All or Whitelist. There are certain session variables that Squarespace uses for validation, security, and other utilities. I have added the following values to Whitelist Cookies: JSESSIONID, SS_MID, crumb, ss_cid, ss_cpvisit, ss_cvisit, test. Make sure to put each value on a separate line, without commas.
Forward Query Strings is set to True, as some Squarespace API calls use query strings, so these must be passed along.
Smooth Streaming, Restrict Viewer Access, and Compress Objects Automatically can all be left at their default values, or chosen as required if you know you need them to be set differently.
Distribution Settings / General
Price Class and AWS WAF Web ACL can be left alone.
Alternate Domain Names should list your domain, and your domain with the www subdomain attached, e.g. example.com, www.example.com.
For SSL Certificate, please follow the tutorial here to upload your certificate to IAM if you haven't already, then refresh your certificates (there is a control next to the dropdown for this), select Custom SSL Certificate and select the one you've provisioned. This ensures that browsers recognize your SSL over HTTPS as valid. This is not necessary if you're not using HTTPS at all.
All following settings can be left at default, or chosen to meet your own specific requirements.
Route 53 (DNS)
You need to have a Hosted Zone set up for your domain (this is specific to Route 53 setup).
You need to set an A record to point to your Cloudfront distribution.
You should set a CNAME record for the www subdomain name pointing to your Cloudfront distribution, even if you don't plan on using it (later we'll go through setting Squarespace to only use the root domain by redirecting the www subdomain)
Squarespace
On your Squarespace site, you simply need to go to Settings->Domains->Connect a Third-Party Domain. Once there, enter your domain and continue. Under the domain's settings, you can uncheck Use WWW Prefix if you'd like people accessing your site from www.example.com to redirect to the root, example.com. I prefer this, but it's up to you. Under DNS Settings, the only value you need is CNAME that points to verify.squarespace.com. Add this CNAME record to your DNS settings on Route 53, or other DNS provider. It won't ever say that your connection has been fully completed since we're using a custom way of deploying, but that won't matter.
Your site should now be operating through CloudFront pointing to your Squarespace deployment! Please note that DNS propagation takes time, so if you're unable to access the site, give it some time (up to several hours) to propagate.
Notes
I can't say exactly whether each and every one of the values set under Whitelist Cookies is necessary, but these are taken from using the Chrome Inspector to determine what cookies were present under the Cookie header in the request. Initially I tried to tell CloudFront to whitelist the Cookie header itself, but it does not allow that (presumably because it wants you to use the cookie-specific whitelist). If your deployment is not working, see if there are more cookies being transmitted in your requests: under the Cookie header, the values you're looking for should look like my_cookie=somevalue;other_cookie=othervalue, where my_cookie and other_cookie in my example are what you'd add to the whitelist.
The same procedure can be used to forward other headers entirely that may be needed via the Forward Headers whitelist. Simply inspect and see if there's something that looks like it might need to go through.
Remember, if you're not whitelisting a header or cookie, it's not getting to Squarespace. If you don't want to bother, or everything is effed (pardon my language), you can always set to allow all headers/cookies, although this adversely affects caching performance. So be conservative if you can.
Hope this helps!
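If you prefer to script the distribution instead of clicking through the console, the behavior settings above map onto CloudFront's API fairly directly. Below is a hedged sketch using boto3 and the legacy ForwardedValues-style configuration (which corresponds to the Forward Headers / Forward Cookies options described here); the caller reference and alias domains are placeholders, and the custom SSL certificate block is omitted:

    import boto3

    cloudfront = boto3.client("cloudfront")

    distribution_config = {
        "CallerReference": "squarespace-proxy-1",  # placeholder - any unique string
        "Comment": "Squarespace via CloudFront",
        "Enabled": True,
        "Aliases": {"Quantity": 2, "Items": ["example.com", "www.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "squarespace-origin",
                "DomainName": "ext-cust.squarespace.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "squarespace-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
            "ForwardedValues": {
                "QueryString": True,
                "Headers": {"Quantity": 1, "Items": ["Host"]},
                "Cookies": {
                    "Forward": "whitelist",
                    "WhitelistedNames": {
                        "Quantity": 7,
                        "Items": ["JSESSIONID", "SS_MID", "crumb", "ss_cid",
                                  "ss_cpvisit", "ss_cvisit", "test"],
                    },
                },
            },
            "MinTTL": 0,
        },
        # ViewerCertificate (for the custom SSL certificate) is left out here;
        # add it once your certificate has been uploaded to IAM or ACM.
    }

    # Uncomment to actually create the distribution (requires AWS credentials):
    # response = cloudfront.create_distribution(DistributionConfig=distribution_config)
    # print(response["Distribution"]["DomainName"])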
Here are the settings to get CloudFront working with Squarespace!
Behaviours:
Allowed HTTP Methods: ensure that you select GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE, otherwise forms will not work.
Forward Headers: select Whitelist and choose Host. Otherwise Squarespace will not know which website it needs to load and you get the message 'Website has expired' or similar.
Origins:
Origin Domain Name: set as ext-cust.squarespace.com
Origin Protocol Policy: select HTTPS so that traffic between the CDN and the origin is secure too
General
Alternate Domain Names (CNAMEs): put both your www and non-www addresses here and let Squarespace decide whether to direct www to root or vice versa (e.g. example.com, www.example.com)
You can now configure SSL on CloudFront
HTTPS: you can now enforce HTTPS using a certificate for your site here rather than in Squarespace
A setting I'm unsure about still:
Forward Query Strings: forwarding is not recommended for caching reasons, but I think turning it off could break things...
Route53
Create A records for www and root (e.g. example.com, www.example.com) and set them as aliases to your CloudFront distribution
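As a scripted equivalent of that last step, here is a hedged boto3 sketch; your hosted zone ID and distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront alias records use:

    import boto3

    route53 = boto3.client("route53")

    HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"                 # placeholder - your domain's hosted zone
    DISTRIBUTION_DOMAIN = "d1234abcd.cloudfront.net"  # placeholder - your distribution's domain
    CLOUDFRONT_ALIAS_ZONE = "Z2FDTNDATAQYW2"          # fixed zone ID for CloudFront aliases

    changes = [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": CLOUDFRONT_ALIAS_ZONE,
                "DNSName": DISTRIBUTION_DOMAIN,
                "EvaluateTargetHealth": False,
            },
        },
    } for name in ("example.com.", "www.example.com.")]

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Comment": "Alias records for CloudFront", "Changes": changes},
    )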
This question is specifically about page rules in Cloudflare, which allow you to specify wildcard patterns on your site using rules - and handle each pattern differently.
One of the available settings is "Force SSL": in effect, any request that matches that pattern will be forced down the https:// path, whether that's Flexible SSL or otherwise.
The problem with choosing this option is that all the other options for CDN/cache time, etc. disappear.
This raises some obvious issues to which I've found no clear answer:
If Cloudflare serves a https:// resource, does it still cache static resources?
How do I control the nature of the resources cached? In other words, the settings equivalent to "Simple" caching, and "Aggressive" caching.
Is there any ability to set options such as cache expiry, time that they reside on edge servers before expiration, etc?
Is it possible to set "Cache Everything" when serving requests over https://? It certainly exists on the http:// equivalent.
I would like Cloudflare to redirect my visitors from http:// to https:// automatically, as opposed to doing it in my app, because the various apps on my domain (WordPress included) have various quirks that make configuring each one both tedious and error-prone.
You can add another rule for caching for https: the first rule would divert all http to https, with another rule right after it to handle the https traffic.
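If you'd rather create the two rules programmatically, Cloudflare's Page Rules API can do it. The sketch below is an assumption-laden example: the zone ID, API token, and URL patterns are placeholders, and the action IDs always_use_https and cache_level are worth verifying against the current API docs for your plan:

    import requests

    API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your-zone-id"    # placeholder
    TOKEN = "your-api-token"    # placeholder
    headers = {"Authorization": f"Bearer {TOKEN}"}

    rules = [
        # Rule 1: divert all http traffic to https.
        {
            "targets": [{"target": "url",
                         "constraint": {"operator": "matches", "value": "http://*example.com/*"}}],
            "actions": [{"id": "always_use_https"}],
            "priority": 1,
            "status": "active",
        },
        # Rule 2: caching behaviour for the https traffic.
        {
            "targets": [{"target": "url",
                         "constraint": {"operator": "matches", "value": "https://*example.com/*"}}],
            "actions": [{"id": "cache_level", "value": "cache_everything"}],
            "priority": 2,
            "status": "active",
        },
    ]

    for rule in rules:
        resp = requests.post(f"{API}/zones/{ZONE_ID}/pagerules", json=rule, headers=headers)
        resp.raise_for_status()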
"If Cloudflare serves a https:// resource, does it still cache static resources?"
Yes. It doesn't matter if it is http:// or https://.
What CloudFlare caches by default
"How do I control the nature of the resources cached? In other words, the settings equivalent to "Simple" caching, and "Aggressive" caching."
By using those settings in your performance settings.
"Is it possible to set "Cache Everything" when serving requests over https://? It certainly exists on the http:// equivalent."
I would actually recommend not doing cache everything, really. While it is an option that is available, you could have issues with users that have to sign in, etc.
"Is there any ability to set options such as cache expiry, time that they reside on edge servers before expiration, etc?"
You can set a browser cache TTL in your performance settings; we should also honor the Expires headers you have set on your server.
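Since the edge honors the expiry headers from your origin, the main thing you control in your app is the Cache-Control / Expires header on the responses you want cached. A minimal sketch, assuming a hypothetical Flask origin (route name and lifetime are examples only):

    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/static/<path:filename>")
    def static_files(filename):
        # Cloudflare's edge honors Cache-Control / Expires from the origin,
        # so long-lived static assets can declare their own cache lifetime here.
        response = send_from_directory("static", filename)
        response.headers["Cache-Control"] = "public, max-age=86400"
        return response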