Dealing with CORS and Cookie scope across subdomains - asp.net

I'm having difficulty reconciling some conflicting information from StackOverflow and other sources regarding AJAX calls and cookie scope across sub-domains. Consider these two independent sites that share a common domain:
site #1: www.myDomain.com
site #2: sub.myDomain.com
Requirements:
Site #1 must be able to execute an AJAX call to site #2 by way of sub.myDomain.com/handler.ashx.
Site #1 and Site #2 must be able to read each other's cookies.
These requirements lead me to the following questions:
Does the handler code located at sub.myDomain.com/handler.ashx need to alter its response headers to allow CORS? I know that I can write a call like this:
resp.Headers.Add("Access-Control-Allow-Origin","*");
…but from what I read, this will expose the handler to all domains. I just want to limit the calls to those originating from *.myDomain.com. What if I don't include the CORS header at all? What's the default behavior?
Do Site #1 and/or Site #2 need to tweak the Domain property of HttpCookie in order for the two sites to read each other's cookies?
What if I don't touch the Domain properties at all? What's the default behavior? Some forum responses suggest that cookie scope will be limited to the subdomain, while others suggest the entire domain is in scope (which is what I want), in which case no action would be required on my part.
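(To be concrete, by "tweak the Domain property" I mean something like the following sketch; the cookie name and value are just examples:)
// Sketch, inside an ASP.NET page or handler (System.Web): widen a cookie's
// scope so every *.myDomain.com host can read it.
var cookie = new HttpCookie("userToken", "abc123");
cookie.Domain = ".myDomain.com"; // covers www.myDomain.com and sub.myDomain.com
Response.Cookies.Add(cookie);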

If you add "*" as the value of the Access-Control-Allow-Origin header, you allow all sites to call your handler.
In the sub.myDomain.com handler response you have to add something like this:
context.Response.AppendHeader("Access-Control-Allow-Origin", "http://www.myDomain.com");
This way only pages served from www.myDomain.com can read responses from sub.myDomain.com.

The CORS spec is all-or-nothing: the header supports *, null, or exactly one origin; it cannot carry a wildcard pattern like *.myDomain.com. See http://www.w3.org/TR/cors/#access-control-allow-origin-response-header
In your ASHX handler you will need to validate the Origin request header using a regex, and then echo the validated origin value in the Access-Control-Allow-Origin response header.
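For illustration, here is a minimal sketch of such a handler (the class name, regex, and response body are mine, not from the question):
using System.Text.RegularExpressions;
using System.Web;

public class Handler : IHttpHandler
{
    // Matches http(s)://myDomain.com and any subdomain of it (illustrative).
    private static readonly Regex AllowedOrigin = new Regex(
        @"^https?://([a-z0-9-]+\.)*mydomain\.com$", RegexOptions.IgnoreCase);

    public void ProcessRequest(HttpContext context)
    {
        string origin = context.Request.Headers["Origin"];
        if (!string.IsNullOrEmpty(origin) && AllowedOrigin.IsMatch(origin))
        {
            // Echo the validated origin instead of "*".
            context.Response.AppendHeader("Access-Control-Allow-Origin", origin);
            // Needed only if the AJAX call should include cookies.
            context.Response.AppendHeader("Access-Control-Allow-Credentials", "true");
        }
        context.Response.ContentType = "text/plain";
        context.Response.Write("hello");
    }

    public bool IsReusable { get { return true; } }
}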

Related

Disable cache sharing among websites

Is there a way to tell the browser not to share a cached resource among websites?
I want to give websites a link to some JavaScript on my server, and I want the response to be different for each domain, using the Referer header as a check.
The cached response should be available only to the domain that requested it; when end users visit another site that uses the same script link, a fresh request should be made.
I'm not sure I fully understand your question.
Is your scenario like this: stackoverflow.com and yourwebsite.com use the same script, "https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js", but you don't want the cached script to be shared with stackoverflow.com?
That is under the control of googleapis.com's web server.
If the resource's origin server (googleapis.com) wants to implement the feature you describe, it can use the Vary response header, which defines the secondary key of a cache entry.
Maybe "Vary: Origin", but that only works for CORS requests.
Maybe "Vary: Referer", but the Referer contains the full URL path.
It still doesn't solve your problem, but I hope it helps.
See the MDN HTTP caching documentation and RFC 7234 Section 4.1.
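As a sketch of the Vary idea in an ASP.NET handler (header wiring only; this doesn't solve the Referer-path issue noted above):
// Key cached responses on the Origin header, so two sites requesting the
// same URL get separate cache entries instead of sharing one.
string origin = context.Request.Headers["Origin"];
context.Response.AppendHeader("Vary", "Origin");
if (!string.IsNullOrEmpty(origin))
{
    // Pair it with a per-origin CORS header so each cached copy is origin-specific.
    context.Response.AppendHeader("Access-Control-Allow-Origin", origin);
}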

How does CORS (Access-Control-Allow-Origin header) increase security?

I'm doing some work with this right now and I have to say, it makes no sense at all to me! Basically, I have a CDN server which provides CSS, images, etc. for a site. For whatever reason, in order for my browser to stop blocking those resources with a CORS error, I had to have that server (the CDN) add the Access-Control-Allow-Origin header. But as far as I can tell, that does absolutely nothing to increase security. Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain? If that were a malicious domain, wouldn't it just have Access-Control-Allow-Origin set to * so that sites load its malicious responses (you don't have to answer that, because obviously they would)?
So can someone explain how this mechanism/feature provides security? As far as I can tell the implementors fucked up and it actually does nothing. The header should be required from the page which references/requests cross-domain resources, rather than from the domain being requested.
To be clear: if I request a page at domain A, it would make sense for the response to include an Access-Control-Allow-Origin header white-listing resources from domain B (Access-Control-Allow-Origin: *.B.com); however, it makes no sense at all for domain B to effectively white-list itself by providing the header Access-Control-Allow-Origin: *, which is how this is currently implemented. Can anyone clarify what the benefit of this feature is?
If I have a protected resource hosted on site A, but also control sites B, C, and D, I may want to use that resource on all of my sites but still prevent anyone else from using it on theirs. So I instruct my site A to allow origins B, C, and D; in practice, since the header carries a single value, the server checks the request's Origin header and echoes it back in Access-Control-Allow-Origin when it is one of B, C, or D. It's up to the web browser itself to honor this and not hand the response to the underlying JavaScript (or whatever initiated the request) if it didn't come from an allowed origin; error handlers are invoked instead. So it's really not for your security as much as it's an honor-system (all major browsers implement it) access-control method for servers.
Primarily, Access-Control-Allow-Origin is about protecting data from leaking from one server (let's call it privateHomeServer.com) to another server (let's call it evil.com) via an unsuspecting user's web browser.
Consider this scenario:
You are on your home network browsing the web when you accidentally stumble onto evil.com. This web page contains malicious JavaScript that tries to look for web servers on your local home network and send their content back to evil.com. It does this by trying to open XMLHttpRequests against all local IP addresses (e.g. 192.168.1.1, 192.168.1.2, ..., 192.168.1.255) until it finds a web server.
If you are using an old web browser that isn't Access-Control-Allow-Origin aware, or you have set Access-Control-Allow-Origin: * on your privateHomeServer, then your browser would happily retrieve the data from privateHomeServer (which presumably you didn't bother password-protecting, since it was safely behind your home firewall) and hand that data to the malicious JavaScript, which could then send the information on to the evil.com server.
On the other hand, with an Access-Control-Allow-Origin-aware browser and the default web configuration on privateHomeServer (i.e. not sending Access-Control-Allow-Origin: *), your web browser blocks the malicious JavaScript from seeing any data retrieved from privateHomeServer. So you are protected from such attacks unless you go out of your way to change the default configuration on your server.
Regarding the question:
"Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain?"
The fact that your page contains code that is attempting to get resources from a particular server is implicitly telling the web browser that you believe the resources are safe to fetch. It wouldn't make sense to need to repeat this again elsewhere.
CORS only makes sense for mashup content providers, nothing more.
Example: you are the provider of an embedded maps mashup service which requires registration. Now you want to make sure that your AJAX mashup map will only work for your registered users on their domains. Other domains should be excluded. Only for this reason does CORS make sense.
Another example: someone tries to protect a REST service with CORS. A clever developer sets up an AJAX proxy and, et voilà, you can access that service from every domain.
Such an AJAX proxy would make no sense for a mashup; conversely, CORS makes no sense for REST services, because you can bypass the restriction with a simple HTTP client.
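To make that bypass concrete, here is a minimal sketch of such an AJAX proxy (the handler name and remote URL are hypothetical):
using System.Net;
using System.Web;

public class ProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        using (var client = new WebClient())
        {
            // Server-to-server fetch: no browser involved, so CORS never applies.
            string body = client.DownloadString("https://api.example.com/data");
            context.Response.ContentType = "application/json";
            context.Response.Write(body);
        }
    }

    public bool IsReusable { get { return true; } }
}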

What settings are required to put AWS CloudFront CDN in front of a squarespace website?

I had trouble getting AWS CloudFront to work with Squarespace: forms would not submit, and the site kept saying "website expired". What settings are needed to get CloudFront working with a Squarespace site?
This is definitely doable, considering I just set this up. Let me share the settings I used on Cloudfront, Squarespace, and Route53 to make it work. If you want to use a different DNS provider than AWS Route53, you should be able to adapt these settings. Keep in mind that this is not an e-commerce site, but a standard site with a blog, static pages, and forms. You can likely adapt these instructions for other issues as/if they come up.
Cloudfront (CDN)
To make this work, you need to create a Cloudfront Distribution for Web.
Origin Settings
Origin Domain Name should be set to ext-cust.squarespace.com. This is Squarespace's entry point for external domain names.
Origin Path can be left blank.
Origin ID is just the unique ID for this distribution and should auto-populate if you're on the distribution creation screen, or be fixed if you're editing Origin Settings later.
Origin Custom Headers do not need to be set.
Default Cache Behavior Settings / Behaviors
Path Patterns should be left at Default.
I have Viewer Protocol Policy set to Redirect HTTP to HTTPS. This dictates whether your site can use one or both of HTTP or HTTPS. I prefer to have all traffic routed securely, so I redirect all HTTP traffic to HTTPS. Note that you cannot do the reverse and redirect HTTPS to HTTP, as this will cause authentication issues (your browser doesn't want to expose what you thought was a secure connection).
Allowed HTTP Methods needs to be GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. This is because forms (and other things such as comments, probably) use the POST HTTP method to work.
Cached HTTP Methods I left to just GET, HEAD. No need for anything else here.
Forward Headers needs to be set to All or Whitelist. Squarespace's entry point, mentioned earlier, needs to know what domain you're coming from in order to serve your site, so the Host header must be whitelisted, or allowed along with everything else if set to All.
Object Caching, Minimum TTL, Maximum TTL, and Default TTL can all be left at their defaults.
Forward Cookies is the missing component needed to get forms working. You can set this either to All or to Whitelist. There are certain session variables that Squarespace uses for validation, security, and other utilities. I have added the following values to Whitelist Cookies: JSESSIONID, SS_MID, crumb, ss_cid, ss_cpvisit, ss_cvisit, test. Make sure to put each value on a separate line, without commas.
Forward Query Strings is set to True, as some Squarespace API calls use query strings so these must be passed along.
Smooth Streaming, Restrict Viewer Access, and Compress Objects Automatically can all be left at their default values, or chosen as required if you know you need them to be set differently.
Distribution Settings / General
Price Class and AWS WAF Web ACL can be left alone.
Alternate Domain Names should list your domain, and your domain with the www subdomain attached, e.g. example.com, www.example.com.
For SSL Certificate, follow AWS's documentation for uploading your certificate to IAM if you haven't already, then refresh your certificates (there is a control next to the dropdown for this), select Custom SSL Certificate, and select the one you've provisioned. This ensures that browsers recognize your SSL over HTTPS as valid. This is not necessary if you're not using HTTPS at all.
All following settings can be left at default, or chosen to meet your own specific requirements.
Route 53 (DNS)
You need to have a Hosted Zone set up for your domain (this is specific to Route 53 setup).
You need to set an A record to point to your Cloudfront distribution.
You should set a CNAME record for the www subdomain pointing to your Cloudfront distribution, even if you don't plan on using it (later we'll set Squarespace to use only the root domain by redirecting the www subdomain). A conceptual sketch of both records follows this list.
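Conceptually, the two records look like this (Route 53 alias A records have no standard zone-file form; dxxxxxxxxxxxxx.cloudfront.net stands in for your distribution's domain name):
example.com.      A (alias)  ->  dxxxxxxxxxxxxx.cloudfront.net.
www.example.com.  CNAME      ->  dxxxxxxxxxxxxx.cloudfront.net.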
Squarespace
On your Squarespace site, you simply need to go to Settings->Domains->Connect a Third-Party Domain. Once there, enter your domain and continue. Under the domain's settings, you can uncheck Use WWW Prefix if you'd like people accessing your site from www.example.com to redirect to the root, example.com. I prefer this, but it's up to you. Under DNS Settings, the only value you need is CNAME that points to verify.squarespace.com. Add this CNAME record to your DNS settings on Route 53, or other DNS provider. It won't ever say that your connection has been fully completed since we're using a custom way of deploying, but that won't matter.
Your site should now be operating through Cloudfront, pointing to your Squarespace deployment! Please note that DNS propagation takes time, so if you're unable to access the site, give it some time (up to several hours) to propagate.
Notes
I can't say exactly whether each and every one of the values set under Whitelist Cookies is necessary, but these are taken from using the Chrome Inspector to determine what cookies were present under the Cookie header in the request. Initially I tried to tell Cloudfront to whitelist the Cookie header itself, but it does not allow that (presumably because it wants you to use the cookie-specific whitelist). If your deployment is not working, see if there are more cookies being transmitted in your requests: under the Cookie header, the values you're looking for will look like my_cookie=somevalue;other_cookie=othervalue, where my_cookie and other_cookie are what you'd add to the whitelist.
The same procedure can be used to forward other headers that may be needed, via the Forward Headers whitelist. Simply inspect the requests and see if there's something that looks like it might need to go through.
Remember, if you're not whitelisting a header or cookie, it's not getting to Squarespace. If you don't want to bother, or everything is effed (pardon my language), you can always set to allow all headers/cookies, although this adversely affects caching performance. So be conservative if you can.
Hope this helps!
Here are the settings to get CloudFront working with Squarespace!
Behaviours:
Allowed HTTP Methods: ensure that you select GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. Otherwise forms will not work.
Forward Headers: select Whitelist and choose 'Host'. Otherwise Squarespace will not know which website it needs to load, and you get the message 'Website has expired' or similar.
Origins:
Origin Domain Name set as: ext-cust.squarespace.com
Origin Protocol Policy: select HTTPS so that traffic between the CDN and the origin is secure too.
General
Alternate Domain Names (CNAMEs): put both your www and non-www addresses here and let Squarespace decide whether to direct www to root or vice versa (e.g. example.com, www.example.com).
You can now configure SSL on CloudFront
HTTPS: you can now enforce HTTPS using a certificate for your site here, rather than in Squarespace.
Settings I'm still unsure about:
Forward Query Strings: forwarding is not recommended for caching reasons, but I think turning it off could break things...
Route53
Create A records for the root and www (e.g. example.com, www.example.com) and set them as aliases to your CloudFront distribution.

new to cross domain CORS

I am new to this, so I have some questions after looking at a bunch of sites related to CORS.
First of all, let's say I have http://domain1.com making an AJAX call to http://domain2.com. Looking at http://enable-cors.org/server.html, it says I will have to add
Access-Control-Allow-Origin: *
as a response header, or add that setting to the web.config in the root directory of my application. But I was confused: should I add the response header to the domain1 or the domain2 application? My guess was domain2, but I can't be sure because I don't have the means to test it.
Furthermore, what if domain2.com is on HTTPS, meaning I am calling from HTTP to HTTPS; will this work?
And how about IE?
You should add it on http://domain2.com, because Access-Control-Allow-Origin is permission for http://domain1.com to get information from http://domain2.com.
Note that the * value means the domain allows access to everyone, so you need to be careful with it. A better option would be:
Access-Control-Allow-Origin: http://domain1.com
This works for HTTPS too, with one caveat: the header accepts only a single origin, so to allow both http://domain1.com and https://domain1.com you must echo whichever one appears in the request's Origin header (a comma-separated list is not valid). As for IE: IE10+ supports CORS natively, while IE8/9 support cross-origin calls only through XDomainRequest, which is more limited.
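Since the question also mentions web.config: on IIS7+, the header can be added there instead of in code. A sketch for domain2's site root (adjust the origin value to your own domain):
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- Sent on every response; grants http://domain1.com read access. -->
      <add name="Access-Control-Allow-Origin" value="http://domain1.com" />
    </customHeaders>
  </httpProtocol>
</system.webServer>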

How to respect "Serve static content from a cookieless domain" page speed rule in IIS6?

Creating a cookieless site (or subdomain, which is a very common best practice) in IIS6/IIS7/IIS7.5 is simple: you need to tell the website not to use cookies :) which, in IIS terms, means not to use a session.
This can be achieved in IIS6/IIS7 in two ways.
Modifying the Web.config file (my personal recommendation)
Using the IIS Manager GUI to find the setting and changing it.
IMPORTANT
Before you do any testing, you must must must clear all cookies (or at least all cookies for the domain you are testing); otherwise they will get passed along even if you have done all the steps.
1. Via Config File
You need to define the session state to off.
<system.web>
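<!-- mode="Off" disables session state entirely, so the session cookie is never issued -->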
<sessionState cookieName="What_ever" mode="Off" />
</system.web>
NOTE: the cookieless attribute (true|false) does NOT mean 'send cookies / do not send cookies'. It controls whether sessions work with or without cookies, passing a session GUID in the URL instead (if set to true).
2. Via GUI
Hope this helps (I assume you know how to test whether cookies are or are not being sent...).
What this means is that your content needs to come from a domain that has no cookies attached to it. StackOverflow.com is an example of a site that does this. You will notice that all SO's static content comes from a domain called sstatic.net.
http://sstatic.net/stackoverflow/all.css
http://sstatic.net/js/master.js
This is so that the client and the server don't have to waste resources on actually parsing and handling cookie data. The good news is, you can use a sub-domain, assuming that you set your cookie domain correctly.
Yahoo Best Practices for Speeding Up Your Web Site, "Use Cookie-free Domains for Components":

When the browser makes a request for a static image and sends cookies together with the request, the server doesn't have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there. If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.

Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it's best to use the www subdomain and write the cookies to that subdomain.
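In ASP.NET terms, following that advice means scoping cookies to the www host so the static subdomain stays cookie-free; a minimal sketch (names illustrative):
// Inside an ASP.NET page or handler (System.Web): scope the cookie to
// www.example.org only, so requests to static.example.org carry no cookies.
var auth = new HttpCookie("authToken", "abc123");
auth.Domain = "www.example.org";
Response.Cookies.Add(auth);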
Create a subdomain (for example static.example.com) and store all static content (images, CSS, JS) there.
