How does it appear that MDN can detect a request from an iframe on the Server Side and send no content? - iframe

Please note: this question is not directly related to server-side detection that a page is shown inside an IFrame. I'm showing you an instance where it would appear that the folks at MDN (Mozilla Developer Network) are already detecting that content is being delivered to an iframe. Although, if you read through this, I discuss the possibility that this isn't server-side related at all; it might be some sort of "rights" issue declared somehow, or in some way I don't know about. The point is to understand how something that already exists works.
First of all, I do not desire to rip off MDN (Mozilla Developer Network) content as my own. I'm asking this because I'm truly puzzled by it. The guys at MDN seem to have pulled off a nice trick, and I'd like to know it, but maybe it's simpler than I realized.
The code is only:
<iframe src="https://developer.mozilla.org/en-US/docs/HTML/HTML5"></iframe>
Take, for example, this fiddle:
http://jsfiddle.net/jfcox/D3UNZ/
Do you notice how there's no content in the iframe? There doesn't appear to be any content related to the request on the Chrome network tab.
I assure you, that'd work on a "normal" website, like example.org. See http://jsfiddle.net/jfcox/nPwcu/
So, I ask, what is it that they are doing to detect that a request is being made from an iframe?
Is there some Browser-Fu I don't know about? Oddly enough, that might be the case. From IE9:
To help protect the security of information you enter into this
website, the publisher of this content does not allow it to be
displayed in a frame.
Wow! Ok, so maybe it's not server-side, maybe it's all Browser-Fu. Even so, how do IE9 and these other browsers know what I don't know? What do I need to look up to learn about this?
I have my own suspicions, namely that there's some file at the root of the website like crossdomain.xml for flash that defines permissions about content usage or whatever, but I still wouldn't even know where to start if that's the case.

Turns out, it's a pretty simple copy protection. All you need to do is set a response header.
https://developer.mozilla.org/en-US/docs/HTTP/X-Frame-Options
https://datatracker.ietf.org/doc/html/draft-ietf-websec-x-frame-options-00
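For reference, the header itself is tiny. These are the two values supported everywhere (the draft also describes an ALLOW-FROM form, but browser support for it was spotty), followed by a minimal sketch assuming Node.js; any server-side stack can set the same header:
X-Frame-Options: DENY
X-Frame-Options: SAMEORIGIN
// minimal sketch, assuming Node.js; any server-side stack can set the same header
var http = require('http');
http.createServer(function (req, res) {
  res.setHeader('X-Frame-Options', 'SAMEORIGIN'); // or 'DENY' to refuse all framing
  res.end('This page will not render inside a frame on another site.');
}).listen(8080);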
Yes, "frame", instead of "iframe".
:eyeroll:
I suppose the name makes sense, considering the possibility somebody could still attempt to use old HTML 4 frame tags for whatever purpose, and I would expect most browser/DOM engines have baked-in support of frame tags given HTML history. Netscape created/supported frames as early as version 2.0 and iframe was a later, purely-Microsoft invention that found wide adoption, IIRC.

Related

asp.net page with iframe with youtube video sometimes requires refresh

I have an asp.net page with an iframe in it that shows a YouTube video. I have buttons that use the YouTube API to control the video. I'm finding that often the video doesn't show up, and I have to do a view-refresh to make it work. This problem is limited to Internet Explorer; I believe that Google Chrome does not have the same problem.
I've been told not to use iframes at all by a tech support person at the server company that hosts my site. They say people are moving away from iframes, and that I should use other methods of embedding YouTube videos.
I could certainly do that, but YouTube recommends using iframes, because they are more flexible.
If I could make the 'refresh' problem predictable, I would submit it to Microsoft, but I can't figure out why it happens sometimes, but not other times.
I could also force a refresh.
My question is: does anyone have a clue why this bug is happening?
Thanks.
This happens a lot with IFRAME content. I believe it's to do with the way that Internet Explorer caches stuff, but to be perfectly honest, I'm not sure on the details.
What I can share is an old trick we used to use with dialog boxes.
Say your src attribute is "http://www.youtube.com/watch?v=1wL7RHoDnxs"
You can bang an extra parameter onto that GET string, like:
"http://www.youtube.com/watch?v=1wL7RHoDnxs&doesntmatter=203933"
See the extra doesntmatter parameter there? The value is random, or based on a timestamp. Your choice. The trick is to give a different URL to IE each time. This'll stop it from trying to use a cache for your IFRAME content.
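A rough sketch of that trick in JavaScript (the iframe id is made up for illustration):
// give IE a unique URL every time so it cannot reuse a stale cached copy of the frame
var frame = document.getElementById('videoFrame'); // hypothetical id of your iframe
var baseSrc = 'http://www.youtube.com/watch?v=1wL7RHoDnxs';
frame.src = baseSrc + '&doesntmatter=' + new Date().getTime();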

Protocol-relative URL in CSS for multiple subdomains

Our PHP-driven website has recently added SSL certificates to support the https protocol, and we are having problems with IE6 through IE8 even though our pages do not request any resources over http.
I have read this post : http://paulirish.com/2010/the-protocol-relative-url/
So, basicaly, I need to replace all the
background: url('/images/whatever.gif');
With :
background: url('//www.mydomain.com/images/whatever.gif');
I'm not quite a fan of using my domain name across several hundred CSS files to start with, but suppose I do: what would be the best practice for my development, test and staging environments, which are all on different subdomains than the production site? I would need to use dynamic representations of the domain name in the CSS files, most probably driven from some sort of config file, but how?
You don't have to add your hostname to get protocol-relative behavior. The form you're already using is a root-relative URL: it doesn't specify a protocol, so it inherits the protocol of the page that references it, just like a //host/path URL does.
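To make that concrete, here is how the three URL forms behave on a page served over https (www.mydomain.com is just a placeholder):
/* absolute URL pinned to http: this is what actually triggers mixed-content warnings */
background: url('http://www.mydomain.com/images/whatever.gif');
/* protocol-relative URL: inherits the scheme of the page, so https here */
background: url('//www.mydomain.com/images/whatever.gif');
/* root-relative URL: inherits both the scheme and the host of the page */
background: url('/images/whatever.gif');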
Can you detail the problems you are having? Have you confirmed with a test that the URL with a domain name will solve your problem?
PS: If you have hundreds of CSS files, you'll probably be happier with a dynamic generation system anyway, but that's a separate matter.
The problems are popups in IE6, 7 and 8 that say there is mixed content in the page (which would mean http resources included in an https page). Chrome, FF4 and up, and IE9 do not show those popups, which is correct: there are no http resources included.
Several blog posts seem to point to background URLs as the source of this problem. One of the posts (http://blogs.msdn.com/b/ieinternals/archive/2009/06/22/https-mixed-content-in-ie8.aspx) has a comment from Eric Law at MSFT, who states:
The debugger reports that the following is the URL that is triggering
the prompt:
"about:/images/lightview/inner_slideshow_play.png"
Of course, that URL doesn't actually exist in your markup. It looks
like there's dynamic creation of an IFRAME and injection of content
into that frame. The default URL for an empty frame is about:blank,
which leads to the prompt.
and ...
Other quirks to be aware of: In IE6, we treat "about:blank" as
insecure content, as well as "javascript:" and "res:". In IE7, we
fixed the "about:blank" case, but we have not (yet) changed javascript
and res.
So the problem is known and confirmed by MSFT for their older browsers, which create an IFRAME and inject content that then generates the error.
Most workarounds I have stumbled upon point to using protocol-relative URLs, like in the first URL I showed. I'm not sure you can consider background: url('/images/whatever.gif'); a protocol-less call, because of this infamous IE6-to-8 bug.
-- Edit: Working on a solution. We have found this in our JavaScript files, and it seems it has been the real problem from the beginning:
<input target="_blank" class="sub" type="button" style="background-image:url(../images/name.gif);">
OK! Got it.
By the way, if anybody ever runs across the need to find out exactly what problems they are having with IE6, IE7 or IE8 on https pages that are incorrectly reported as containing mixed content, use this script: http://www.enhanceie.com/dl/scriptfreesetup.exe
So in the end it was the button I talked about in the last post. Changing it to an imported class, swapping background-image for just background, and getting rid of the ../ at the beginning did the trick.
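For reference, the change looked roughly like this (file and class names are illustrative):
<!-- before: inline style with a relative path, resolved against the injected frame's about:blank URL -->
<input type="button" class="sub" style="background-image:url(../images/name.gif);">
<!-- after: plain markup, with the background moved into an imported stylesheet -->
<input type="button" class="sub">
/* in the imported CSS file */
.sub { background: url(images/name.gif); }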
Thanks all for your help, I'll still flag an answer on Ned's input, since it was of some help.

Bust iFrames accurately when implementing DiggBar or FacebookBar?

Understanding all the security and UI concerns with iFrames, I am implementing a toolbar similar to the DiggBar or FacebookBar.
A top bar persists across the top 30 pixels of the screen, and an iFrame displaying external content fills up the remainder of the page.
When users close the toolbar, and thereby exit my little site to go directly to the third-party site, how can I bust the iFrame properly and display the right page? If the user clicks on even one link in the iFrame, I end up showing the wrong page.
Given my understanding of browser security, and coupled with how DiggBar and FacebookBar fail to do this accurately, I'm guessing it cannot be done.
But I was hoping the Stackoverflow coders are smarter and might have an answer? :)
Thanks!
You can't. Because of browser cross site-scripting security, your bar which sits in its own frame cannot access any other frames and determine their URLs.
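A minimal illustration of that restriction (the frame id is hypothetical):
var frame = document.getElementById('externalContent'); // hypothetical id of the full-page iframe
try {
  // reading the location of a cross-origin frame throws a security exception
  console.log(frame.contentWindow.location.href);
} catch (e) {
  console.log('Blocked by the same-origin policy: ' + e.message);
}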
Not to mention that you'll be sued by website owners for numerous things and that you'll piss off every hacker out there.
This is the last thing you want to do if you'd like to NOT be known around your office as that one guy who wanted to include everyone else's web site in their own site without the owners' permission.
I wouldn't speak up at any of the conventions either.
I've also added the question: "Have you ever written code or worked on code that frames other sites?" to my list of questions to use to weed out job applicants.

CSS changes not reflecting on site

Whenever we make changes to the CSS, it generally takes 24 hours for those changes to show up on my site. I have tried clearing the server cache and the browser cache, but that doesn't help either. Is there any other way to make CSS changes appear immediately after an update?
It happens in all browsers. When I check it in the browser, I can access my CSS file through two paths. For example, I store my CSS in a folder named "Cssfolder" and my CSS file is named 135.css.
So when I access the two folder paths, Cssfolder/135.css and cssfolder/135.css, one of them shows me the latest CSS whereas the other shows me the old CSS. Notice the "C" is capital in one path and lowercase in the other.
Thanks.
I've found this to be a pretty common problem in a lot of my projects. I would suggest two things...
If it's just an app that you are working on, you can use the CSS Cachebuster during development.
Following the idea behind the Cachebuster I have found that often adding the timestamp of the CSS file as a query string off of the CSS link will help in telling the browser that the file is different... something like... whatever.css?12212009035543
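The resulting link tag is nothing fancy; it ends up looking something like this (folder name and timestamp are purely illustrative):
<link rel="stylesheet" type="text/css" href="/Cssfolder/135.css?12212009035543">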
You might want to use a monitoring tool, like Live HTTP Headers for Firefox, to see the requests and responses to and from the server. This usually solves a lot of problems for me. Take a look at the "Expires" headers and conditional requests (like "If-Modified-Since"). That said, also compare server and client local times and timezones: it might be that they differ significantly and conditional GET requests only "seem to be" handled correctly, because of future or otherwise mangled timestamps.
You can force the current CSS to load directly from the server by appending a random unique value to the URL, like http://example.com/Cssfolder/135.css?983274928374 and http://example.com/cssfolder/135.css?08973249827. There's no way this would ever get cached unless you use the same random value twice.
This way you learn where to look further for the solution to your problem: At the server, the ISP/a proxy or your browser.
You really need to see whether this is server side or client side. If the server is still serving the old CSS then clearly you've got no chance on the client side.
I've occasionally seen cases where I've had to open the CSS file directly in the browser, and then the next time I visited the real page, it used that new CSS. Usually just hitting refresh does it.
Do you have any web caches like Akamai involved anywhere?
If you try to go to the CSS page from a computer which has never seen the old version, which version does it show?
EDIT: Changed answer to reflect edits in question.
I have dealt with this issue in the past, and ended up writing an HttpModule to deal with it.
It's pretty simple: it just finds all script/CSS links in the head tag (they now need to have runat=server) and appends the assembly version number to the link, in the same way as Tim K describes. This way I'm sure my clients always fetch the newest CSS/scripts when my app is updated in production, and I never have to deal with this issue again.
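The rendered output of such a module would end up looking something like this (the version number is illustrative):
<!-- source markup, picked up by the module because of runat=server -->
<link rel="stylesheet" href="/Cssfolder/135.css" runat="server" />
<!-- rendered output, with the assembly version appended as a cache-buster -->
<link rel="stylesheet" href="/Cssfolder/135.css?v=1.0.3.27" />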
Maybe Internet Service Provider cache, as in this case?
I was perplexed by this issue until someone said Ctrl+F5. Worked for me :)
When I am developing and I need to be sure that I am seeing changes as I work, I stick the CSS in the page, i.e.
<style type="text/css">
/* your css */
</style>
Or you could constantly change the name of the css file itself, not very useful in a production environment, but perhaps okay while developing.
I know it doesn't solve the problem, but for developing it is okay.

Using ASP.Net, is there a programmatic way to take a screenshot of the browser content?

I have an ASP.Net application in which, as a desired feature, users would like to be able to take a screenshot. While I know this can be simulated, it would be really great to have a way to take a URL (or the currently rendered page) and turn it into an image which can be stored on the server.
Is this crazy? Is there a way to do it? If so, any references?
I can tell you right now that there is no way to do it from inside the browser, nor should there be. Imagine that your page embeds GMail in an iframe. You could then steal a screenshot of the person's GMail inbox!
This could be made safe by having the browser "black out" all iframes and embeds that would violate cross-domain restrictions.
You could certainly write an extension to do this, but be aware of the security considerations outlined above.
Update: You can use a canvas utility function to get a screenshot of a page on the same origin as your code. There's even a lib to allow you to do this: http://experiments.hertzen.com/jsfeedback/
You can find other possible answers here: Using HTML5/Canvas/JavaScript to take screenshots
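A sketch of that canvas approach, assuming a recent, promise-based build of the html2canvas library is loaded on the page; as noted above, it only captures same-origin content:
html2canvas(document.body).then(function (canvas) {
  // the canvas now holds a rendering of the visible page; convert it to an image for upload
  var dataUrl = canvas.toDataURL('image/png');
  console.log('captured ' + dataUrl.length + ' characters of base64 PNG data');
});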
Browsershots has an XML-RPC interface and available source code (in Python).
I used the free assembly UrlScreenshot.dll which you can download here.
Works nicely!
There is also WebSiteScreenShot but it's not free.
You could try a browser plugin like IE7 Pro for Internet Explorer, which allows you to save a screenshot of the current site to a file on disk. I'm sure there is a comparable plugin for Firefox out there as well.
If you want to do something like you described, you need to call an external process that prints the IE output, as described here.
Why don't you take another approach?
If users need to be able to view the same content again later, then that sounds like a business requirement for your application, and so you should be building it into your application.
Structure the URL so that when the same user (assuming you have sessions and the application shows different things to different users) visits the same URL, they always see the same thing. They can then bookmark the URL locally, or you can even have an application feature that saves it in a user profile.
Part of this would mean making "clean URLs", e.g. site.com/view/whatever-information-needed-here.
If you are doing time-based data, where it changes as it gets older, there are probably a couple of possible approaches.
If your data is not changing on a regular basis, then you could always make the "current" page something like site.com/view/2008-10-20 (adding hour/minute/second as appropriate).
If it is refreshing and/or updating more regularly, have the "current" page as site.com/view, but allow the exact time to be specified afterwards. In this case, you'd have to have a "link to this page" type function, which would link to the permanent URL with the full date/time. Look to Google Maps for inspiration here: if you scroll across a map, you can always click "link to here" and it will provide a link that includes the GPS coordinates, objects on the map, etc. In that case it's not a very friendly URL, but it does work quite well. :)
