I want to load a small plugin from domain B in an iframe on a page from domain A.
The plugin from domain B uses a service worker for optimal performance.
However, the SW can't be installed because there is no serviceWorker object on navigator when the page is loaded inside an iframe.
Is it not supported? (and what might be the reasoning if so?)
async function registerPluginWorker(uri) {
  if ('serviceWorker' in navigator) {
    // register the plugin's service worker and check for updates
    let registration = await navigator.serviceWorker.register(uri)
    registration.update()
  } else {
    // navigator has no serviceWorker here (see below: the parent page is plain http)
    console.log('sw iframe not installed')
    console.log(navigator)
  }
}
After lots of trying and debugging I have found the answer:
The origin site A is hosted over plain http.
The plugin from domain B is (of course) served over https.
It seems the service worker is blocked in an iframed https site if the parent is http.
If anyone has the reasoning behind this, please let me know
(I don't see it)
I'm doing some work with this right now and I have to say, it makes no sense at all to me! Basically, I have a CDN server which provides CSS, images, etc. for a site. For whatever reason, in order for my browser to stop blocking those resources with a CORS error, I had to have that server (the CDN) add the Access-Control-Allow-Origin header. But as far as I can tell that does absolutely nothing to increase security. Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain? If that were a malicious domain, wouldn't it just have Access-Control-Allow-Origin set to * so that sites load its malicious responses (you don't have to answer that because obviously they would)?
So can someone explain how this mechanism/feature provides security? As far as I can tell the implementors fucked up and it actually does nothing. The header should be required from the page which references/requests cross-domain resources rather than from that domain being requested.
To be clear: if I request a page at domain A, it would make sense for the response to include the Access-Control-Allow-Origin header whitelisting resources from domain B (Access-Control-Allow-Origin: B.com). However, it makes no sense at all for domain B to effectively whitelist itself by providing the Access-Control-Allow-Origin header, which is how this is currently implemented. Can anyone clarify what the benefit of this feature is?
If I have a protected resource hosted on site A, but also control sites B, C, and D, I may want to use that resource on all of my sites but still prevent anyone else from using that resource on theirs. So I instruct my site A to send Access-Control-Allow-Origin: B, C, D along with all of its responses. It's up to the web browser itself to honor this and not serve the response to the underlying Javascript or whatever initiated the request if it didn't come from an allowed origin. Error handlers will be invoked instead. So it's really not for your security as much as it's an honor-system (all major browsers do this) access control method for servers.
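Something like the following Node.js sketch (origins and port are made up) illustrates the allowlist idea. One wrinkle the answer glosses over: in practice the header carries a single origin per response, so servers typically echo the request's Origin back only when it is on the list.

// Minimal sketch of an allowlisting server on "site A" (hypothetical origins).
const http = require('http');

const allowedOrigins = ['https://b.example', 'https://c.example', 'https://d.example'];

http.createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && allowedOrigins.includes(origin)) {
    // Echo the caller's origin back only if it is allowed.
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Vary', 'Origin'); // keep caches from mixing per-origin responses
  }
  res.end('the protected resource');
}).listen(8080);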
Primarily, Access-Control-Allow-Origin is about protecting data from leaking from one server (let's call it privateHomeServer.com) to another server (let's call it evil.com) via an unsuspecting user's web browser.
Consider this scenario:
You are on your home network browsing the web when you accidentally stumble onto evil.com. This web page contains malicious javascript that tries to look for web servers on your local home network and then send their content back to evil.com. It does this by trying to open XMLHttpRequests to all local IP addresses (e.g. 192.168.1.1, 192.168.1.2, ..., 192.168.1.255) until it finds a web server.
If you are using an old web browser that isn't Access-Control-Allow-Origin aware, or you have set Access-Control-Allow-Origin * on your privateHomeServer, then your browser would happily retrieve the data from your privateHomeServer (which presumably you didn't bother password-protecting, as it was safely behind your home firewall) and then hand that data to the malicious javascript, which can then send the information on to the evil.com server.
On the other hand, using an Access-Control-Allow-Origin aware browser and the default web configuration on privateHomeServer (i.e. not sending Access-Control-Allow-Origin *), your web browser would block the malicious javascript from seeing any data retrieved from privateHomeServer. So you are protected from such attacks unless you go out of your way to change the default configuration on your server.
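To make the "aware browser" case concrete, here is a rough browser-side sketch of what that probe runs into (the address and logging are purely illustrative):

// The probe as evil.com's script might attempt it. The browser may still send
// the request, but without an Access-Control-Allow-Origin header on the response
// it refuses to hand the body to this script.
fetch('http://192.168.1.1/')                       // hypothetical local address
  .then(res => res.text())
  .then(body => console.log('exfiltrate', body))   // never reached: blocked by CORS
  .catch(err => console.log('blocked by the browser:', err));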
Regarding the question:
Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain?
The fact that your page contains code that is attempting to get resources from a particular server is implicitly telling the web browser that you believe the resources are safe to fetch. It wouldn't make sense to need to repeat this again elsewhere.
CORS only makes sense for mashup content providers, nothing more.
Example: you provide an embedded maps mashup service that requires registration. Now you want to make sure that your ajax mashup map will only work for your registered users on their own domains; other domains should be excluded. Only for this reason does CORS make sense.
Another example: someone misuses CORS for a REST service. A clever developer sets up an ajax proxy and, voilà, you can access that service from every domain.
Such an ajax proxy would make no sense for a mashup; on the other hand, CORS makes no sense for REST services, because you could bypass the restriction with a simple http client.
I have an HTML5 app written in static html/js/css (it's actually written in Dart, but compiles down to javascript). I'm serving the application files via CDN, with the REST api hosted on a separate domain. The app uses client-side routing, so as the user goes about using the app, the url might change to be something like http://www.myapp.com/categories. The problem is, if the user refreshes the page, it results in a 404.
Are there any CDNs that would allow me to create a rule such that, if the user requests a page that is a valid client-side route, it just returns the (in my case) client.html page?
More detailed explanation/example
The static files for my web app are stored on S3 and served via Amazon's CloudFront CDN. There is a single HTML file that bootstraps the application, client.html. This is the default file served when visiting the domain root, so if you go to www.mysite.com the browser is actually served www.mysite.com/client.html.
The web app uses client-side routing. Once the app loads and the user starts navigating, the URL is updated. These urls don't actually exist on the CDN. For example, if the user wanted to browse widgets, she would click a button, client-side routing would display the "widgets" view, and the browser's url would update to www.mysite.com/widgets/browse. On the CDN, /widgets/browse doesn't actually exist, so if the user hits the refresh button on the browser, they get a 404.
My question is whether or not any CDNs support looking at the request URI and rewriting it. So, I could see a request for /widgets/browse and rewrite it to /client.html. That way, the application would be served instead of returning a 404.
I realize there are other solutions to this problem, namely placing a server in front of the CDN, but it's less ideal.
I do this using CloudFront, but I use my own server running Apache to accomplish this. I realize you're using a server with Amazon, but since you didn't specify that you're restricted to that, I figured I'd answer with how to accomplish what you're looking to do anyway.
It's pretty simple. Any time you request something that isn't already in the cache on CloudFront, or is in the cache but has expired, CloudFront goes back to your web server and asks it to serve up the content. At this point, you have total control over the request. I use mod_rewrite in Apache to capture the request, then determine what content I'm going to serve depending on the request. In fact, there isn't a single file (save one php script) on my server, yet CloudFront believes there are thousands. I'm pretty sure url rewriting is standard on most web servers; I can only confirm it on lighttpd and Apache from my own experience, though.
More Info
All you're doing here is just telling your server to rewrite incoming requests in order to satisfy them. This would not be considered a proxy or anything of the sort.
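For the question's /widgets/browse case, the origin-side rule might look something like this (a sketch only, assuming Apache with mod_rewrite and the client.html layout described in the question):

# If the requested path is not an existing file or directory,
# serve the app shell instead, so /widgets/browse returns client.html.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ /client.html [L]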
The flow of content between your app and your server, with cloudfront in between is like this:
appRequest -> cloudFront
If cloudFront has the file, return data to the user without asking your server for the file.
If cloudFront DOESN'T have the file (or it has expired), go back to the origin server and ask it for a new copy to cache.
So basically, what is happening in your situation is this:
A) app -> asks cloudfront for a url cloudfront doesn't have
B) cloudfront then asks your source server for the file
C) the file doesn't exist there, so the server tells cloudFront to go fly a kite
D) cloudFront comes back empty handed and makes your app 404
E) app crashes and burns, users run away and use something else.
So, all you're doing with mod_rewrite is telling your server how it can re-interpret certain formatted requests and act accordingly. You could point all .jpg requests to singleImage.jpg, then have your app ask for:
www.mydomain.com/image3.jpg
www.mydomain.com/naughtystuff.jpg
Neither of those images even has to exist on your server. Apache would just honor the request by sending back singleImage.jpg. But as far as cloudfront or your app is concerned, those are two different files residing at two different places on the server.
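A sketch of what that .jpg trick could look like in Apache (filenames as in the example above):

# Send every .jpg request (except singleImage.jpg itself) to the one real image.
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/singleImage\.jpg$
RewriteRule \.jpg$ /singleImage.jpg [L]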
Hope this clears it up.
http://httpd.apache.org/docs/current/mod/mod_rewrite.html
I think you are using the URL structure the wrong way. The path defined by forward slashes is supposed to bring you to a specific resource, in your example client.html. However, for routing beyond that point (within that resource) you should make use of the # fragment, as is done in many javascript frameworks. This tells your router what the state of the resource (your html page or app) is. If other resources are referenced, e.g. images, you should provide different paths for them, which would go through the CDN.
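A minimal sketch of that hash approach (route names are made up): the fragment never reaches the CDN, so a refresh always loads client.html and the router restores the view.

// Hash-based client-side routing sketch.
function renderRoute() {
  const route = location.hash.slice(1) || 'home';  // e.g. '#widgets/browse'
  console.log('render view for', route);           // app-specific rendering goes here
}
window.addEventListener('hashchange', renderRoute);
renderRoute(); // handle the route present on initial load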
I'm thinking about my website architecture, which uses https. I now have a CDN server hosting images, css and other static files.
The website itself uses HTTPS to secure sensitive customer data. Will using static images loaded from, for example, 'http://cdn.example.com/images/test.jpg' on the website 'https://www.example.com' pop up a "Loading insecure data" message?
So: loading external NOT SECURED data on a SECURED website.
Will this cause a popup warning "Loading insecure data, continue?"?
Thx!
Yes.
If a page is loaded over HTTPS then every resource it uses should also be loaded over HTTPS.
Otherwise a man-in-the-middle could replace images with misleading ones (or ones that exploit buffer overflow issues in browsers to execute code) and scripts with ones that do different things (such as leak data to the third party).
You have to load every resource over https to get rid of that warning. You can either move the resources to your server that supports encryption, or link to an external resource over https.
If you really want to load http content on an https page, you can follow this method: a backend handler in charge of downloading the required content and exposing it through self-forged links that include a hash. The security issue is then addressed and the content becomes accessible through https.
Dealing with HTTP content in HTTPS pages
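A rough Node.js sketch of that kind of handler (the route, parameter names and secret are assumptions, and TLS termination is left to whatever sits in front of it):

// The page links to /proxy?url=...&sig=...; the handler verifies the hash it
// forged itself, fetches the plain-http resource server-side, and relays it.
const http = require('http');
const crypto = require('crypto');
const { URL } = require('url');

const SECRET = 'replace-me';  // used to forge and verify the link hashes
const sign = u => crypto.createHmac('sha256', SECRET).update(u).digest('hex');

http.createServer((req, res) => {
  const q = new URL(req.url, 'http://localhost');
  const target = q.searchParams.get('url');
  const sig = q.searchParams.get('sig');
  if (!target || sig !== sign(target)) {        // only serve links the site itself forged
    res.statusCode = 403;
    return res.end('bad signature');
  }
  http.get(target, upstream => {                // fetch the plain-http resource server-side
    res.writeHead(upstream.statusCode, { 'Content-Type': upstream.headers['content-type'] });
    upstream.pipe(res);                         // relay it over the page's https origin
  }).on('error', () => { res.statusCode = 502; res.end(); });
}).listen(8080);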
I did this recently.
I have a raspberry pi loaded with nginx, and PHP.
I use HTTPS to handle requests from the web to the PHP code, which in turn sends http requests to my local network to assemble the page. Works well.
We've recently run into an issue with our ASP.NET application where if a user goes to ourcompany.com instead of www.ourcompany.com, they will sometimes end up on a page that does not load data from the database. The issue seems to be related to our SSL certificate, but I've been tasked to investigate a way on the code side to fix this.
Here's the specific use case:
There is a user registration page that new users get sent to after they "quick register" (enter name, email, phone). With "www" in the URL (e.g. "www.ourcompany.com") it works fine, they can proceed as normal. However, if they browsed to just "ourcompany.com" or had that bookmarked, when they go to that page some data is not loaded (specifically a list of states from the DB) and, worse, if they try to submit the page they are kicked out entirely and sent back to the home page.
I will go into more detail if necessary, but my question is simply whether there is an application setting that will keep the session for the app regardless of whether the URL has the "www" or not. Buying a second SSL cert isn't an option at this point unless there is no other recourse, and I have to look for a way to solve this without another SSL cert.
Any ideas to point me in the right direction?
When your users go to www.ourcompany.com they get a session cookie for the www subdomain. By default, cookies are not shared across subdomains, which is why users going to ourcompany.com do not have access to their sessions.
There is a useful thread discussing this issue here. The suggested solution is:
By the way, I implemented a fairly good fix/hack today. Put this code on every page:

Response.Cookies["ASP.NET_SessionId"].Value = Session.SessionID;
Response.Cookies["ASP.NET_SessionId"].Domain = ".mydomain.com";

Those two lines of code rewrite the Session cookie so it's now accessible across sub-domains.

Doug, 23 Aug 2005
Surely you are trying to solve the wrong problem?
Is it possible for you to just implement URL rewriting and make it consistent?
So for example, http://example.com redirects to http://www.example.com?
For an example of managing rewriting see:
http://paulstack.co.uk/blog/post/iis-rewrite-tool-the-pain-of-a-simple-rule-change.aspx
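Assuming the IIS URL Rewrite module that the linked post discusses, such a rule might look roughly like this in web.config (the domain is a placeholder):

<!-- Sketch only; goes inside <system.webServer> with URL Rewrite installed. -->
<rewrite>
  <rules>
    <rule name="Redirect bare domain to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^example\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>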
From the browsers point of view, www.mysite.com is a different site than mysite.com.
If you have a rewrite engine, add a rule to send all requests to www that don't already have it.
Or (this is what I did) add a separate IIS site with the "mysite.com" host header and set the IIS flag to redirect all traffic to www.
In either of these cases, any time a browser requests a page without the www prefix, it will receive a redirect response sending it to the correct page.
(Screenshots of the redirect site's home directory properties and the relevant host header setting omitted.)
This fixes the issue without requiring code changes, and incidentally prevents duplicate search results from Google etc.
Just an update, I was able to fix the problem with a web.config entry:
<httpCookies domain=".mycompany.com" />
After adding that, the problem went away.
I have a site that is hosted in shared hosting environment. They use a wildcard subdomain setup and suggest using Response.Redirect to achieve the illusion of a subdomain.
Is there a way of doing this such that the "switch" takes place on the server rather than bouncing back down to the browser first?
Server.Transfer only works if I transfer to an actual resource. So redirecting from sub1.mydomain.com to www.mydomain.com/public/ does not work; I'd have to redirect to www.mydomain.com/public/mypage.aspx instead, which I don't want to do.
To ensure that the "switch" takes place on the server, you could create a simple HTTP Module to intercept each request, inspect the requested URL and then rewrite it as needed. All your module has to do is handle the BeginRequest event and then rewrite the request. In this way you could really have unlimited sub-domains.
You might also want to add a blank host header, so that any requests for subdomains not listed get forwarded to the proper default website.
If you aren't familiar with them, modules are very simple to create and work with.
Heres a link to a very similar implementation by Brendan Tompkins:
http://codebetter.com/blogs/brendan.tompkins/archive/2006/06/27/146875.aspx
You could also do some URL rewriting in the module should you need specific URL "look" behavior.
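A rough sketch of such a module, using the sub1.mydomain.com -> /public/ mapping from the question (this illustrates the approach, not the code from the linked post):

// Sketch of an IHttpModule that rewrites the path server-side on BeginRequest.
using System;
using System.Web;

public class SubdomainModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        var context = ((HttpApplication)sender).Context;
        string host = context.Request.Url.Host;          // e.g. "sub1.mydomain.com"

        if (host.StartsWith("sub1.", StringComparison.OrdinalIgnoreCase))
        {
            // Serve the request from /public/ without a round trip to the browser.
            context.RewritePath("/public" + context.Request.RawUrl);
        }
    }

    public void Dispose() { }
}

The module would still need to be registered under the modules section of web.config, and the host-to-folder mapping above is only a guess at the layout described in the question.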