How to enable simple CORS on nginx

I installed Nginx on my laptop. My web server serves DASH streaming on demand using the dash.js player, which is hosted only on localhost. I want to restrict things so that only the DASH dataset served from localhost can be used in that player. Can I use CORS for my purpose? I tried adding
location / {
    add_header 'Access-Control-Allow-Origin' 'http://localhost';
}
but any DASH dataset can still use the player hosted on localhost. How do I enable simple CORS features on Nginx? Is my understanding of CORS wrong?
Thanks

I want to restrict only DASH dataset from localhost that can be used in that player. Can I use CORS for my purpose?
Not really. CORS is used for getting at resources cross-origin. If a player could natively play DASH (which none of the browsers currently do), the content would play on any page, CORS support or not. The way DASH players work in-browser today is by loading the resources via XHR requests and feeding the data to the Media Source Extensions API. To do this, the CORS headers are needed.
Cross-origin request blocking isn't really meant to prevent access to a resource. It's to prevent scripts on one page from accessing resources belonging to another page, effectively impersonating a user. Access-Control-Allow-Origin headers enable other pages to access those resources by effectively saying that the resource queried is safe for use.
If you want to actually block access to something, you should use allow/deny instead. See http://nginx.org/en/docs/http/ngx_http_access_module.html
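A minimal sketch of that approach: only local requests may fetch the DASH content (the /dash/ location is an assumption about where the dataset lives).
location /dash/ {
    # allow only requests originating from this machine
    allow 127.0.0.1;
    allow ::1;
    # everyone else gets 403 Forbidden
    deny all;
}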

Related

Correct headers for Chromecast

I'm trying to play a .mp4 video hosted on my NGINX server with the Default Receiver App for Chromecast.
I'm able to cast the videos used in their example apps just fine, but my own video fails without returning any error. I'm guessing this has to do with the CORS configuration on my server.
I'm using this config to enable CORS on my server. I've tried adding gstatic.com to the allowed origins as well, but it doesn't help. I took a look at the headers of one of their example videos and tried to reverse engineer which headers I'm missing, but I still can't get it to work.
What headers do I need to enable for the Chromecast to play my files?
The server that hosts the content needs to set the necessary CORS headers for playback to work.
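A hedged sketch of the kind of nginx config involved (the /videos/ location, the wildcard origin, and the exact header set are assumptions, not a prescribed list from Google):
location /videos/ {
    # let the Cast receiver fetch the media cross-origin, including the
    # Range requests used for seeking
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, HEAD, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length, Content-Range';
}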

Downsides of 'Access-Control-Allow-Origin: *'?

I have a website with a separate subdomain for static files. I found out that I need to set the Access-Control-Allow-Origin header in order for certain AJAX features to work, specifically fonts. I want to be able to access the static subdomain from localhost for testing, as well as from the www subdomain. The simple solution seems to be Access-Control-Allow-Origin: *. My server uses nginx.
What are the main reasons that you might not want to use a wildcard for Access-Control-Allow-Origin in your response header?
You might not want to use a wildcard when e.g.:
Your web app and, let's say, its AJAX backend API are running on different domains (or just on different ports) and you do not want to expose the backend API to the whole Internet; then you do not send *. For example, your web is on http://www.example.com and the backend API on http://api.example.com; then the API would respond with Access-Control-Allow-Origin: http://www.example.com.
If the API expects to receive cookies from the client, it must not send Access-Control-Allow-Origin: *; the header's value must be the origin of the actual request.
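A sketch of that second case in nginx terms, reusing the domains from the example above (the /api/ path is an assumption):
location /api/ {
    # cookies only flow when a concrete origin is echoed back;
    # Access-Control-Allow-Credentials is invalid in combination with '*'
    add_header 'Access-Control-Allow-Origin' 'http://www.example.com';
    add_header 'Access-Control-Allow-Credentials' 'true';
}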
For testing, adding an entry in the /etc/hosts file mapping dev.mydomain.com to 127.0.0.1 (or the server's public IP) is a decent workaround.
Another way is to have a separate domain served by nginx itself, like dev.mydomain.com, pointing to the same (or a test) instance of the backend servers and static web root, with some security measures like:
satisfy all;
allow <YOUR-CIDR/IP>;
deny all;
Clarification on: Access-Control-Allow-Origin: *
This setting protects the users of your website from being scammed/hijacked while visiting other evil websites in a modern browser which respects this policy (all mainstream browsers do).
This setting does not protect the web service from scraper scripts accessing your static assets and APIs at high speed, e.g. brute-force attacks, bulk downloading, or load generation.
P.S.: For development, you can consider using a free, low-footprint, private, peer-to-peer, VPN-like network between your development box and server: https://tailscale.com/
In my opinion, the main downside is that you could have other websites consuming your API without your explicit permission.
Imagine you have an e-commerce site; another website could run all its transactions using its own look and feel but backed by you. For you it might even seem good, since you get the money in the end, but your brand loses its recognition.
Another problem: such a website could change the payload sent to your backend, doing things like changing the delivery address.
The idea is simply not to authorize unknown websites to consume your API and show its results to their users.
You could use the hosts file to map 127.0.0.1 to your domain name, "dev.mydomain.com", if you do not want to use Access-Control-Allow-Origin: *.
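For reference, such a hosts entry would look like this (the domain name is taken from the answer above):
127.0.0.1    dev.mydomain.com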

How does CORS (Access-Control-Allow-Origin header) increase security?

I'm doing some work with this right now and I have to say, it makes no sense at all to me! Basically, I have a CDN server which provides CSS, images, etc. for a site. For whatever reason, in order for my browser to stop blocking those resources with a CORS error, I had to have that server (the CDN) add the Access-Control-Allow-Origin header. But as far as I can tell, that does absolutely nothing to increase security. Shouldn't the page I request, which references those cross-domain resources, be telling the browser it's safe to get stuff from the other domain? If that were a malicious domain, wouldn't it just have Access-Control-Allow-Origin set to * so that sites load its malicious responses? (You don't have to answer that, because obviously they would.)
So can someone explain how this mechanism/feature provides security? As far as I can tell the implementors got it backwards and it actually does nothing. The header should be required from the page which references/requests cross-domain resources rather than from the domain being requested.
To be clear: if I request a page at domain A, it would make sense for the response to include an Access-Control-Allow-Origin header whitelisting resources from domain B (Access-Control-Allow-Origin: B.com). However, it makes no sense at all for domain B to effectively whitelist itself by providing the header, which is how this is currently implemented. Can anyone clarify what the benefit of this feature is?
If I have a protected resource hosted on site A, but also control sites B, C, and D, I may want to use that resource on all of my sites but still prevent anyone else from using that resource on theirs. So I instruct my site A to allow the origins B, C, and D (in practice by echoing the matching origin back in Access-Control-Allow-Origin, since the header takes a single origin or *). It's up to the web browser itself to honor this and not serve the response to the underlying JavaScript (or whatever initiated the request) if it didn't come from an allowed origin; error handlers will be invoked instead. So it's really not for your security as much as it's an honor-system (all major browsers follow it) access control method for servers.
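One hedged sketch of such a whitelist in nginx (site names and paths are placeholders): a map echoes the matching Origin back, since the header accepts only one origin per response.
# at http{} level: map the request's Origin header to an allowed value, or ""
map $http_origin $cors_origin {
    default                 "";            # unknown origins: header omitted
    "https://b.example.com" $http_origin;
    "https://c.example.com" $http_origin;
    "https://d.example.com" $http_origin;
}

server {
    listen 80;
    server_name a.example.com;

    location /protected/ {
        # add_header skips the header entirely when the value is empty
        add_header 'Access-Control-Allow-Origin' $cors_origin;
    }
}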
Primarily, Access-Control-Allow-Origin is about protecting data from leaking from one server (let's call it privateHomeServer.com) to another server (let's call it evil.com) via an unsuspecting user's web browser.
Consider this scenario:
You are on your home network browsing the web when you accidentally stumble onto evil.com. This web page contains malicious JavaScript that tries to look for web servers on your local home network and then send their content back to evil.com. It does this by trying to open XMLHttpRequests to all local IP addresses (e.g. 192.168.1.1, 192.168.1.2, ... 192.168.1.255) until it finds a web server.
If you are using an old web browser that isn't Access-Control-Allow-Origin aware, or you have set Access-Control-Allow-Origin * on your privateHomeServer, then your browser would happily retrieve the data from your privateHomeServer (which presumably you didn't bother password-protecting, since it was safely behind your home firewall) and hand that data to the malicious JavaScript, which could then send the information on to the evil.com server.
On the other hand, with an Access-Control-Allow-Origin aware browser and a default web configuration on privateHomeServer (i.e. not sending Access-Control-Allow-Origin *), your web browser would block the malicious JavaScript from seeing any data retrieved from privateHomeServer. So you are protected from such attacks unless you go out of your way to change the default configuration on your server.
Regarding the question:
Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain?
The fact that your page contains code that is attempting to get resources from a particular server is implicitly telling the web browser that you believe the resources are safe to fetch. It wouldn't make sense to need to repeat this again elsewhere.
CORS only makes sense for mashup content providers, nothing more.
Example: you are the provider of an embedded-maps mashup service which requires registration. Now you want to make sure that your AJAX mashup map will only work for your registered users on their domains; other domains should be excluded. Only for this reason does CORS make sense.
Another example: someone misuses CORS for a REST service. A clever developer sets up an AJAX proxy and, voilà, you can access that service from every domain.
Such an AJAX proxy would make no sense for a mashup; the other way around, CORS makes no sense for REST services, because you could bypass the restriction with a simple HTTP client.
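To make the proxy idea concrete, a minimal nginx sketch (the /proxy/ path and the upstream hostname are assumptions):
location /proxy/ {
    # the page's JavaScript calls a same-origin path; nginx forwards it to
    # the third-party REST service, so the browser never makes a
    # cross-origin request and CORS never comes into play
    proxy_pass https://rest-service.example.com/;
    proxy_set_header Host rest-service.example.com;
    proxy_ssl_server_name on;  # send SNI to the upstream
}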

http redirects to https

What would cause a site to try to go to an https URL?
We have Sitecore set up to redirect non-www URLs to www-prefixed URLs. Example: joesrx.com resolves to www.joesrx.com through the Sitecore URLResolver.
What we are seeing is that if you type joesrx.com, it tries to go to https://joesrx.com before it hits the Sitecore server. Since there are no certificates on this server and https is not utilized, we get a 404.
Is there something in IIS that is misconfigured? The proxy team says it is not in their settings, and the network team says all of the DNS entries are correct.
As a general rule for debugging these sorts of problems, try to imagine all the elements between you and the application and then use a simple divide and conquer approach. You can also test behavior on individual levels of the path between you and the actual application.
In this case for example (from you to application code):
User
Browser
  browsers may cache redirects: try a different browser, incognito mode, or clearing the cache
Browser extensions/settings
  any extensions that make the browser always try to access websites via https? Try with the extension disabled or in another browser
Proxies/firewalls
  any proxies/firewalls on your end which may modify requests? Try to access the site bypassing any proxies/firewalls, maybe from a different network connection
Network
Web server
Web server configuration / pipelines / resolvers / filters / etc.
  .htaccess / IIS config / nginx config / servlet filters (lots of options depending on your framework): check the server configuration
Actual application code
  well... check the code.
Example of divide and conquer, choosing the Network mid-point: try accessing the URL with wget/curl from the command line; curl -i will also show you the headers received from the server. If you find a "Location: ..." header, it's clear that the server is sending a redirect. Now you only have to check the web server / framework configuration and the actual application code.
There are a few things I would check first:
Do you have rewrite rules in your web.config? They may be pattern-matching on www. and redirecting in order to enforce SSL
Do you have code in your pipelines that is attempting to enforce SSL for specific paths? The code here may not be checking the URL correctly.
In IIS, did you bind the 'www' host name to your IIS site? Or is it falling through to another site that has SSL enforced?
In case the other answers don't help, check for HSTS headers such as "Strict-Transport-Security: max-age=31536000".
This HTTP header tells browsers to use only HTTPS for future requests (among other things).
For more info check out:
https://www.owasp.org/index.php/HTTP_Strict_Transport_Security
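For reference, this is how a site would emit such a header; a one-line nginx sketch using the max-age value from the example above:
add_header Strict-Transport-Security "max-age=31536000";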

serving images from one domain for multiple websites

We have nearly 13 domains within our company and we would like to serve images from one application in order to leverage caching.
For example, we will have c1.example.com and we will put all of our product images under this application. But here I have some doubts:
1. How can I force client browsers to cache the images and not request them again?
2. When I reference those images in my application, I will use the following HTML markup:
<img src="http://c1.example.com/core/img1.png" />
but this causes a problem when I run the website under https: it gives a warning about the page. It should have been https://c1.example.com/core/img1.png when I run my apps under https. What should I do here? Should I always use https? Or is there a way to switch automatically?
I will run my apps under IIS 7.
Yes, you need to serve all resources over https when the HTML page is served over https. That's the whole point of using https.
If the hrefs are hardcoded in the HTML, one solution could be a Response Filter that parses all content sent to the client and replaces http with https where necessary. A simple regular expression should do the trick. There are plenty of articles out there about how these filters work.
Regarding caching, you need to send the correct cache headers and an ETag. There are several questions and answers on this on SO, like this one: IIS7 Cache-Control
You need to use HTTP headers to tell the browser how to cache. It should work by default (assuming you have no query string in your URLs) but if not, here's a knowledge base article about the cache-control header:
http://support.microsoft.com/kb/247404
I really don't know much about IIS, so I'm not sure if there are any other potential pitfalls. Note that browsers may still send HEAD requests sometimes.
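For comparison, a hedged nginx sketch of "send the right cache headers" (the /core/ path comes from the question's example URL; 30 days is an arbitrary choice):
location /core/ {
    expires 30d;   # emits Expires and a Cache-Control: max-age header
    etag on;       # ETags are on by default for static files; shown for clarity
}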
I'd recommend you set up the image server so that HTTP/S is interchangeable, then just serve HTTPS URLs for HTTPS requests.
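A sketch of that idea in nginx terms (illustrative only, since the question targets IIS; certificate and root paths are assumptions):
server {
    # answer on both schemes so the same URL path works over http and https
    listen 80;
    listen 443 ssl;
    server_name c1.example.com;

    # assumed certificate paths
    ssl_certificate     /etc/nginx/ssl/c1.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/c1.example.com.key;

    root /var/www/static;  # assumed document root for the image assets
}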
