Computer blocks the display of an iframe

My company and I have developed a service consisting of two web applications:
A collaborative platform (hereinafter called Platform), accessible at an address similar to online.contoso.com, written in ASP.NET and React and running on a Windows Server 2016 server. This platform authenticates and authorizes users and allows them to perform various operations;
A web application (hereinafter called App), accessible at an address similar to online.contoso.com:44312, written in Django and jQuery and running on the same server as Platform;
Platform exposes App through the following iframe:
<iframe src="https://online.contoso.com:44312?lng=en" id="PlanningFrameId" width="100%" height="645px" style="display: block; border: 0px;"></iframe>
App uses the following Content-Security-Policy:
Content-Security-Policy: frame-ancestors 'self' *.contoso.com online.contoso.com
Some of our customers report that they are unable to load App. For example, today two of our customers reported that one of them could see App and the other could not. The two users share the same internet connection but use two different computers (both Macs). The customer who cannot see App has checked with both Safari and Google Chrome. Is there any reason why a computer (Mac or Windows) would block certain iframes?
We have tested both Platform and App ourselves, and the applications work correctly.
Thanks

The strange thing is not that on some PCs the application does not work, but that on some it does.
The *.contoso.com host-source allows https://*.contoso.com:443 on https: pages and http://*.contoso.com:80 on http: pages (when no port is specified, only the scheme's default port is allowed).
Therefore you have to add a source that covers port 44312 to the policy:
Content-Security-Policy: frame-ancestors 'self' *.contoso.com https://*.contoso.com:44312
Note: the online.contoso.com source is redundant, since it is already covered by *.contoso.com.
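If a reverse proxy such as nginx sat in front of App, the corrected header could be attached like this. This is a minimal sketch only; App actually runs on Windows Server, where the same header could instead be set as a custom response header in IIS or from Django middleware:
# Minimal sketch, assuming an nginx front end for App on port 44312
add_header Content-Security-Policy "frame-ancestors 'self' *.contoso.com https://*.contoso.com:44312" always;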

Related

Pivotal Cloud Foundry - Spring Boot Actuator not working with Pivotal Apps Manager

I'm currently deploying a Spring Boot 1.5.1 application to Pivotal Cloud Foundry. Apps Manager displays the Spring icon, but I can't configure the log level or see any of the settings. I'm getting a 'mixed content' exception in the browser: Apps Manager is trying to access /cloudfoundryapplication/info over http instead of https, and the browser is blocking the request. Is there a setting to force Apps Manager to only use https?
Our team encountered a similar issue. We feel it has nothing to do with Apps Manager, but rather with how our app behaves.
In our case we had a bad configuration that caused URLs to be built as http when httpRequest.getScheme() was called:
server.tomcat.internal-proxies: <ips other than your proxy>
Correcting this property (in our case, by letting it fall back to the default as defined here) let getScheme() return https, so when the call was made to /cloudfoundryapplication/info the scheme was built as https.
Another suggestion, made by one of our colleagues, which also resolves the issue but does not address the root cause: front your application (at highest precedence) with a ForwardedHeaderFilter. This makes the X-Forwarded-* headers available in your HttpServletRequest, as described here.
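For context, both fixes only work if the front proxy actually forwards the original scheme; on Cloud Foundry the platform router sets these headers for you. As an illustrative sketch, this is the equivalent a self-managed nginx front end would send (the upstream host and port are assumptions):
location / {
    # Forward the original scheme, host, and client IP so the app
    # can rebuild absolute URLs with https instead of http
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;
}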

How to enable simple CORS on nginx

I installed nginx on my laptop. My web server hosts on-demand DASH streaming using the dash.js player, which is hosted only on localhost. I want to restrict the DASH dataset so it can only be used by the player hosted on localhost. Can I use CORS for this purpose? I tried adding
location / {
    add_header 'Access-Control-Allow-Origin' 'http://localhost';
}
but any DASH dataset can still be used by the player hosted on localhost. How do I enable simple CORS on nginx? Is my understanding of CORS wrong?
Thanks
I want to restrict the DASH dataset so it can only be used by the player hosted on localhost. Can I use CORS for this purpose?
Not really. CORS is used for getting at resources cross-domain. If a player could natively play DASH (which no browser currently does), then the content would play on any page, CORS support or not. The way DASH players work in-browser today is by loading the resources via XHR requests and feeding the data to the Media Source Extensions API. To do this, the CORS headers are needed.
Cross-origin request blocking isn't really meant to prevent access to a resource. It's to prevent scripts on one page from accessing resources belonging to another page, effectively impersonating a user. Access-Control-Allow-Origin headers enable other pages to access those resources by effectively saying that the resource queried is safe for use.
If you want to actually block access to something, you should use allow/deny. http://nginx.org/en/docs/http/ngx_http_access_module.html
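If the goal really is to lock the dataset down to the local machine, a minimal sketch with that module might look like this (the location path is illustrative):
location /dash-dataset/ {
    # Only requests from the local machine may fetch the segments;
    # every other client receives 403 Forbidden
    allow 127.0.0.1;
    deny  all;
}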

Downsides of 'Access-Control-Allow-Origin: *'?

I have a website with a separate subdomain for static files. I found out that I need to set the Access-Control-Allow-Origin header for certain AJAX features to work, specifically fonts. I want to be able to access the static subdomain from localhost for testing, as well as from the www subdomain. The simple solution seems to be Access-Control-Allow-Origin: *. My server uses nginx.
What are the main reasons that you might not want to use a wildcard for Access-Control-Allow-Origin in your response header?
You might not want to use a wildcard when, e.g.:
Your web app and, say, its AJAX backend API run on different domains (or just on different ports) and you do not want to expose the backend API to the whole Internet; then you do not send *. For example, if your web app is on http://www.example.com and the backend API on http://api.example.com, the API would respond with Access-Control-Allow-Origin: http://www.example.com.
If the API wants the client to send cookies, it must not send Access-Control-Allow-Origin: *; instead, the value must be the origin of the actual request (see the sketch below).
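A common nginx pattern for that credentialed case is to echo back only whitelisted origins through a map; a minimal sketch reusing the example domains above:
map $http_origin $cors_origin {
    default                  "";
    "http://www.example.com" $http_origin;
}

server {
    listen 80;
    server_name api.example.com;
    location / {
        # An empty $cors_origin makes nginx omit the header entirely,
        # so unknown origins get no CORS permission at all
        add_header Access-Control-Allow-Origin $cors_origin;
        add_header Access-Control-Allow-Credentials "true";
    }
}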
For testing, adding an entry such as 127.0.0.1 dev.mydomain.com (or your server's public IP instead of 127.0.0.1) to the /etc/hosts file is a decent workaround.
Another way is to have a separate domain served by nginx itself, like dev.mydomain.com, pointing to the same/test instance of the backend servers and static web root, with some security measures such as:
location / {
    satisfy all;
    allow <YOUR-CIDR/IP>;
    deny all;
}
Clarification on Access-Control-Allow-Origin: *
This setting relates to a browser-side protection: it shields the users of your website from being scammed/hijacked while visiting other, malicious websites in a modern browser that respects the policy (all mainstream browsers do).
It does not protect the web service itself from scraper scripts that access your static assets and APIs at high speed - brute-force attacks, bulk downloading, generated load, and so on.
P.S.: For development, you can consider using a free, low-footprint, private peer-to-peer VPN-like network between your development box and server: https://tailscale.com/
Another downside, in my opinion, is that other websites could consume your API without your explicit permission.
Imagine you have an e-commerce site; another website could run all of its transactions with its own look and feel but backed by you. In the end that might seem fine, because you still get the money, but your brand loses its recognition.
Another problem is that such a website could alter the payload sent to your backend, doing things like changing the delivery address.
The idea is simply not to authorize unknown websites to consume your API and show its results to their users.
You could use the hosts file to map 127.0.0.1 to your domain name, dev.mydomain.com, since you would rather not use Access-Control-Allow-Origin: *.

Routing to multiple ASP.NET applications from another application

I have multiple ASP.NET applications running on a single IIS server, as listed below.
HR app - 223.34.56.32:81
Accounting app - 223.34.56.32:82
CRM app - 223.34.56.32:83
Now, I do not want my end users to remember or bookmark all these URLs. Another problem is that these ports (81, 82, 83) are not IANA-approved, so I do not want to expose them to the end users.
I want to build another routing application (using ASP.NET or nginx) that will do the following:
223.34.56.32/HRAdmin.aspx requested - route the request to 223.34.56.32:81/HRAdmin.aspx; the end user will see 223.34.56.32/HRAdmin.aspx in the browser
223.34.56.32/CRMHome.aspx requested - route the request to 223.34.56.32:83/CRMHome.aspx; the end user will see 223.34.56.32/CRMHome.aspx in the browser
My clients have not agreed to use host headers, so I have to make the apps accessible via this ugly-looking IP.
I am a noob in this area and do not know whether this is actually possible. What technology can be used to accomplish this?
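This is a standard reverse-proxy setup. Below is a minimal nginx sketch using the addresses from the question (IIS could do the same with the URL Rewrite and Application Request Routing modules; in practice you would route whole per-app path prefixes rather than individual pages):
server {
    listen 80;

    # The browser only ever sees port 80; nginx forwards each page
    # to the internal port of the app that owns it
    location /HRAdmin.aspx {
        proxy_set_header Host $host;
        proxy_pass http://223.34.56.32:81;
    }
    location /CRMHome.aspx {
        proxy_set_header Host $host;
        proxy_pass http://223.34.56.32:83;
    }
}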

How Selenium WebDriver overcomes Same Origin Policy

How does Selenium WebDriver overcome the same origin policy?
The same origin policy problem exists in Selenium RC.
First of all, the same origin policy was introduced for security reasons: it ensures that the content of your site will never be accessible to a script from another site. As per the policy, any code loaded within the browser can only operate within that website's domain.
What does it do?
The same origin policy prohibits JavaScript code from accessing elements from a domain different from the one where it was loaded.
For example, suppose the HTML code on www.google.com uses a JavaScript program, testScript.js. The same origin policy will only allow testScript.js to access pages within google.com, such as google.com/mail, google.com/login, or google.com/signup. It cannot access pages from different sites, such as yahoo.com/search or fbk.com, because they belong to different domains.
This is the reason why, prior to Selenium RC, testers needed to install local copies of both Selenium Core (a JavaScript program) and the web server containing the web application under test, so that they would belong to the same domain.
How is it avoided?
To avoid the same origin policy, the proxy injection method is used. In proxy injection mode, the Selenium Server acts as a client-configured HTTP proxy that sits between the browser and the application under test (AUT) and masks the AUT under a fictional URL.
Selenium uses JavaScript to drive tests in a browser: it injects its own JavaScript into the response returned from the AUT. But there is a JavaScript security restriction (the same origin policy) that lets you modify a page's HTML with JavaScript only if that JavaScript originates from the same domain as the HTML. This restriction is of utmost importance, but it breaks the way Selenium works, and this is where the Selenium Server plays its important role.
Before Selenium WebDriver, Selenium was a "JavaScript task runner". It would set itself up as a (local) server and open a browser pointed at the Selenium server running locally, so the browser would talk to that local Selenium Server.
This is a problem, though, because the browser receives a script from Selenium telling it to fetch resources from http://websitetotest.com, yet the browser got this script from http://127.0.0.1:9000/selenium (for example). The browser says, "Hey, this script came from localhost and now it's requesting a resource from some outside website." That violates the same origin policy.
WebDriver came along and created a proxy to trick the browser into thinking it is talking to the same server where both Selenium and websitetotest are located. Abhishek provided a concise explanation of this.
This might be a late reply, but if you are referring to Selenium WebDriver and not Selenium RC, the answer is that you don't have to worry about the same origin policy with WebDriver, since each browser has its own driver. This is the whole advantage of WebDriver as opposed to RC: no Selenium Core injection into the browser and no middleware client/server between the browser and the AUT. WebDriver provides native OS-level support for controlling browser automation.
