Why is an iframe used for silent authentication?

I've read on multiple pages that hidden iframes are used for silent authentication, but I haven't been able to figure out why. What are the benefits of using an iframe over directly sending a GET request to the Identity Server?

It's a convenient way to work around the same-origin policy in the browser when we want to make cross-origin requests (across domains). Because the hidden iframe actually navigates to the identity provider, the browser sends the provider's session cookies along and follows its redirects, which a plain cross-origin GET from script cannot do.
This was a popular approach before the more modern CORS feature existed as a way around the same-origin policy.
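To make the mechanics concrete, here is a minimal sketch of silent authentication with a hidden iframe, assuming an OpenID Connect provider that supports prompt=none; the endpoint, client id, and the callback page that posts the token back to the parent window are all hypothetical:

```ts
// Minimal sketch: silent authentication via a hidden iframe (OIDC-style).
// All endpoints and identifiers below are hypothetical.
function silentAuth(): Promise<string> {
  return new Promise((resolve, reject) => {
    const iframe = document.createElement("iframe");
    iframe.style.display = "none";

    const params = new URLSearchParams({
      client_id: "my-client",                             // hypothetical client id
      redirect_uri: `${location.origin}/silent-callback`, // same-origin callback page
      response_type: "id_token token",
      scope: "openid profile",
      prompt: "none",                                     // fail silently instead of showing a login UI
    });
    iframe.src = `https://idp.example.com/authorize?${params}`;

    // The iframe navigates to the identity provider, so the provider's
    // session cookie is sent along; the provider then redirects back to
    // our callback page, which posts the result to the parent window.
    const onMessage = (e: MessageEvent) => {
      if (e.origin !== location.origin) return; // only trust our own callback page
      window.removeEventListener("message", onMessage);
      iframe.remove();
      e.data.error ? reject(new Error(e.data.error)) : resolve(e.data.access_token);
    };
    window.addEventListener("message", onMessage);
    document.body.appendChild(iframe);
  });
}
```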

Related

Why do we still use HTTP instead of WebSockets for building Web Applications?

Recently I dived into the topic of WebSockets and built a small application that utilizes them.
Now I wonder why HTTP-based APIs are still being used, or rather, why they are still being proposed.
As far as I can see, there is nothing HTTP can do that I couldn't also do with WebSockets, while the other way round I gain a lot of improvements.
What would be a real-world example of an application that benefits more from an HTTP-powered backend than from a WebSocket one?
@Julian Reschke made good points. The web is document-based; if you want your application to play in the WWW, it has to comply with the rules of the game.
Still, you can create WebSocket-based SPA applications that comply with those rules.
Using the HTML5 History API, you can change the URL shown by the browser without causing navigation. That allows you to have a different URL in your address bar depending on the state of your app, enabling bookmarking and page history. The "ui-router" plugin for AngularJS plays very well here, changing the URL if you change the state programmatically, and vice versa.
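As a rough sketch of that technique (renderView and the routes are hypothetical stand-ins for real application logic):

```ts
// Sketch of the History API technique; renderView and the routes are
// hypothetical placeholders for real application logic.
function renderView(route: string): void {
  console.log(`rendering ${route}`); // placeholder for actual view rendering
}

// Change the URL shown in the address bar without triggering navigation.
function goToState(route: string): void {
  history.pushState({ route }, "", `/app/${route}`);
  renderView(route);
}

// Re-render the matching view when the user presses Back or Forward.
window.addEventListener("popstate", (e: PopStateEvent) => {
  const route = (e.state as { route?: string } | null)?.route ?? "home";
  renderView(route);
});
```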
You can make your SPA crawlable.
But you still want to use HTTP for some other things, like fetching resources or views and caching them using HTTP cache mechanisms. For example, if you have a big application, you want some big views to be downloaded on demand rather than packing everything into one big main view.
It would be a pain to implement your own caching mechanism for HTML, fetching views yourself and caching them in local storage, for example. Also, by using traditional HTTP requests, those views can be cached in CDNs and other proxy caches.
WebSockets are great for maintaining "connected" semantics, sending data with low latency, and getting data pushed from the server at any time. But a traditional HTTP request is still better for operations that can benefit from distribution mechanisms like caching, CDNs, and load balancing.
Regarding REST API vs. WebSocket API (I think your question was actually about this), it is more a matter of convenience than preference. If your API has a high call rate per connection, a WebSocket probably makes more sense. If your API gets a low call rate, there is no point in using a WebSocket. Remember that a WebSocket connection, although lightweight, means that something on the server is being held (i.e., connection state), and that may be a waste of resources if the request rate does not justify it.
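To illustrate the held-state point, here is a small sketch using the Node "ws" package; the per-client map stands in for whatever session state your server keeps per connection:

```ts
// Sketch of the "held state" cost: each connected client keeps an open
// socket, and here a map entry, for its whole lifetime. Uses the popular
// "ws" package for Node.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const clients = new Map<WebSocket, { connectedAt: number }>();

wss.on("connection", (socket) => {
  clients.set(socket, { connectedAt: Date.now() }); // state held per client
  socket.on("close", () => clients.delete(socket));

  // Request/response over the socket: cheap if the call rate is high,
  // wasteful if the client rarely sends anything.
  socket.on("message", (data) => {
    socket.send(`echo: ${data.toString()}`);
  });
});
```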
Bookmarking? Page history? Caching? Visibility to search engines?
HTTP and WebSockets are two Web tools originated to accomplish different tasks.
With HTTP you typically implement the request/response paradigm.
With WebSockets you typically implement an asynchronous real-time messaging paradigm.
There are several applications where you need both paradigms.
You can also try to use WebSockets for request/response and HTTP for the asynchronous real-time messaging paradigm. While the former makes little sense, the latter is a widespread technique, necessary in all the cases where WebSockets do not work (due to network intermediaries, lack of client support, etc.). If you are interested in this topic, check out this other answer of mine, which tries to clarify the terminology related to these techniques: Is Comet obsolete now with Server-Sent Events and WebSocket?
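As a tiny sketch of one such HTTP-based fallback, here is Server-Sent Events on the client side, assuming a hypothetical /events stream endpoint on the same origin:

```ts
// Sketch of an HTTP-based push fallback: Server-Sent Events.
// The /events endpoint is a hypothetical stream on the same origin.
const source = new EventSource("/events");

source.onmessage = (e: MessageEvent) => {
  console.log("pushed from server:", e.data); // one log line per pushed message
};

source.onerror = () => {
  console.warn("stream interrupted; EventSource retries automatically");
};
```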

Different methods of embedding a page

We provide a third-party site to clients. Frequently when we are handling an RFP, we are asked if it is possible to embed our site within our client's site. Many of our prospective clients have unusual limits or requests, such as 'do not use iframes'.
To that end, I've been asked to ensure that our upcoming redesign of our site makes it practical to embed in clients' sites in at least two ways.
The methods of embedding a fully functioning website (as opposed to a cross-site image or piece of static content) within another are as follows:
iframe - Much used, frequently maligned, and some of our previous RFPs have specifically excluded this as a possibility.
Object/embed tags - Going way back, it's possible to embed a fully functioning HTML page into another the same way you would embed a Flash object.
Ajax - Supposedly capable of loading a full site into an HTML element (such as a div tag), but there seem to be additional security hoops to jump through, due to the dangers of cross-domain requests.
Are there other methods for placing a full site within another from a different domain? Are there any caveats or limitations to any of the above three that I might be unaware of? (For instance, our site uses AJAX calls for login and to update some user-defined settings; will those all function correctly under each of the above embed strategies?)
Thanks in advance.
X-Frame-Options Response Header
If you are embedding your site into somebody else's site, you must be careful about the X-Frame-Options response header. Make sure your site does not send SAMEORIGIN as the value for X-Frame-Options; if you do, it will cause problems for iframes and embedded objects. You can do two things:
You don't send the header at all: any site will be able to display your site in an iframe or as an embedded object. This can cause security problems like clickjacking. See this article for more info on clickjacking and defenses against it.
You can make sure only the sites you authorize will be able to embed your site: this is done by sending an ALLOW-FROM url value for the X-Frame-Options header. You can sniff the HTTP Referer header to identify which site is requesting your page. This is more secure than option 1 (unless users' browsers are malicious, of course). NOTE: Not all browsers support ALLOW-FROM. See the linked reference for supported browsers.
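A minimal sketch of option 2, using Express; the embedder origin is a placeholder, and the CSP frame-ancestors directive is included as the modern equivalent for browsers that never supported (or later dropped) ALLOW-FROM:

```ts
// Sketch: let exactly one authorized client site frame this one.
// Uses Express; the embedder origin is a placeholder.
import express from "express";

const app = express();
const ALLOWED_EMBEDDER = "https://client.example.com"; // hypothetical

app.use((_req, res, next) => {
  // Legacy header for browsers that understand ALLOW-FROM...
  res.setHeader("X-Frame-Options", `ALLOW-FROM ${ALLOWED_EMBEDDER}`);
  // ...and the CSP equivalent for browsers that don't.
  res.setHeader("Content-Security-Policy", `frame-ancestors ${ALLOWED_EMBEDDER}`);
  next();
});

app.get("/", (_req, res) => res.send("embeddable page"));
app.listen(3000);
```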
Same Origin Policy
The same-origin policy will not affect you as long as your site and your client's site do not access each other's DOM.
CORS
Cross-Origin Resource Sharing should be considered if a script from your client's website makes an AJAX request (XMLHttpRequest) to resources on your site. But as far as your question is concerned, I don't think this is the case.
I gave an answer explaining CORS some time back; you can read it if you want a basic understanding of CORS.
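For completeness, a minimal sketch of what allowing such a cross-origin request looks like on the server side, with Express and a hypothetical client origin:

```ts
// Sketch: allowing a known client site to call this API cross-origin.
// Uses Express; the origin is a placeholder.
import express from "express";

const app = express();

app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "https://client.example.com"); // hypothetical
  res.setHeader("Access-Control-Allow-Methods", "GET, POST");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  if (req.method === "OPTIONS") return res.sendStatus(204); // answer preflight requests
  next();
});

app.get("/api/settings", (_req, res) => res.json({ theme: "dark" }));
app.listen(3000);
```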
Plugins for third-party sites
It seems like you are trying to embed some functionality in your clients' sites. Consider offering site plugins like those Facebook and Disqus offer.
I am not sure if the same-origin policy or CORS applies here. I will find that out and get back to you.
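A rough sketch of how such a plugin embed usually works (all names and URLs hypothetical): the client site drops in a placeholder div plus a small loader script, and the loader injects your hosted widget:

```ts
// Sketch of a Disqus-style embed: the client site adds <div id="my-widget">
// plus this loader script, and the loader injects the hosted widget.
// All names and URLs are hypothetical.
(function loadWidget(): void {
  const mount = document.getElementById("my-widget"); // placeholder div on the client site
  if (!mount) return;

  const frame = document.createElement("iframe");
  frame.src = "https://widgets.example.com/embed"; // your hosted widget page
  frame.style.border = "0";
  frame.width = "100%";
  frame.height = "400";
  mount.appendChild(frame);
})();
```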
Note: X-Frame-Options, the same-origin policy, and CORS are implemented by browsers. There's nothing you can do if the user's browser does not implement these things, or if the browser is hacked to not respect these security policies.

Is it possible to add logic to CDN

Is it possible to serve two different pages based on the user agent?
I want to serve pagename-iphone.html if the user agent matches iPhone and pagename-other.html for all other user agents. I want all pages on the site to follow this pattern. Is it possible to do this at the CDN level (CloudFront, Akamai, etc.)?
thanks for your help!
I think what you are after is User-Agent based caching, i.e., Vary: User-Agent.
In theory, a server providing a cache service can definitely do so; however, as far as I can tell, CloudFront and most of the other major CDN providers don't support it.
The basic reason is straightforward: there are too many distinct User-Agent headers in the wild, the value being almost unique for every single browser, not to mention different versions of the same browser. If you key purely on the whole User-Agent, you will lose the benefit of the CDN cache most of the time.
Some of the more advanced servers allow you to add conditions based on headers; in Varnish, for example, you can even write if/else logic to return different values. But this is not available in the majority of CDNs.
On the other hand, you should not rely on a CDN to return different HTML pages. A CDN is more commonly used to accelerate artifacts (JS/CSS/images) than the whole page.
EDIT: Actually, I just received an email from AWS mentioning that CloudFront now supports this:
Mobile Device Detection: You can now cache and deliver customized
content to your viewers on different devices (e.g. mobile vs. desktop)
based on the value of the User Agent header.
Please refer to http://aws.amazon.com/about-aws/whats-new/2014/06/26/amazon-cloudfront-device-detection-geo-targeting-host-header-cors/ for more details.
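The announced feature works by letting CloudFront cache on device-detection headers; if you want to express the rewrite itself as edge logic, the sketch below is written in the style of a Lambda@Edge viewer-request handler (a later CloudFront feature). Field names follow CloudFront's event format, but treat it as illustrative rather than drop-in:

```ts
// Illustrative edge logic in the style of a Lambda@Edge viewer-request
// handler: rewrite pagename.html to pagename-iphone.html for iPhones and
// pagename-other.html for everything else. Not a drop-in function.
export const handler = async (event: any) => {
  const request = event.Records[0].cf.request;
  const ua: string = request.headers["user-agent"]?.[0]?.value ?? "";

  const suffix = /iPhone/.test(ua) ? "-iphone" : "-other";
  request.uri = request.uri.replace(/\.html$/, `${suffix}.html`);

  return request;
};
```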

Why do we need CDNs when HTTP proxies already cache content?

CDNs seem to be a popular way of improving an app's performance.
But why are they required when you consider that HTTP proxies on the web can already cache the content?
CDNs are a kind of web cache, just one operated under your auspices rather than the web user's. You get full control over the freshness of your content, whereas you don't have any control over the proxy servers "out there".
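A small sketch of what "control over freshness" means in practice at the origin, using Express; paths and TTLs are arbitrary examples:

```ts
// Sketch: the origin dictates freshness with standard cache headers,
// which a CDN you operate will honor (and which you can purge on demand).
// Paths and TTLs are arbitrary examples.
import express from "express";

const app = express();

// Long-lived, fingerprinted asset: safe to cache for a year.
app.get("/static/app.css", (_req, res) => {
  res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
  res.send("/* fingerprinted asset */");
});

// Frequently changing content: cache for one minute only.
app.get("/api/news", (_req, res) => {
  res.setHeader("Cache-Control", "public, max-age=60");
  res.json({ headlines: [] });
});

app.listen(3000);
```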
The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?
Read full article at https://developer.yahoo.com/performance/rules.html

How should I build a good (web) API

I'm going to build an API for a web app and I'm interested in what people can suggest as good practices.
I'm already planning to make it versioned (version 1 can only control certain aspects of the system; version 2 could control more, but this may need a change in the way authentication is performed that would be incompatible with version 1), and the authentication will be distinct from the standard username/password people use to log in (so if someone does use a malicious tool, it won't open them up to full impersonation, just whatever the API allows).
Does anyone have further ideas, or examples of sites with particularly good APIs you have used?
Read the RESTful Web Services book, which gives you a good overview of how to use REST in practice and gets you up to speed quickly enough to start now with some confidence. This is more useful than just looking at an existing API, because it also discusses design choices and trade-offs.
1) Bake the version number directly into the URL rather than passing it as a parameter, since that gives you complete freedom to change the organization of your API namespace with each version bump (a few of these tips are sketched in code after this list).
2) Keep your URL rewriting rules (if any) as simple/lean as possible (but no simpler), while making your URLs as beautiful as possible (but no more).
3) Always look for the best HTTP status code you can find for each response (and don't forget about 202 and 207, for example).
4) Implement fascist parameter validation logic, and informative error messages.
5) Use HTTP request headers where appropriate instead of parameters (like Accept, for example, to allow clients to specify the desired data format of the response).
6) Organize your "nouns" in such a way that the URLs used by different client audiences are separated near the "root" of your URL tree (this makes it easier to enforce different authentication mechanisms for those different audiences if needed, or even map different portions of your URL tree to different servers).
7) If you're serving regular web pages off the same domain as your APIs and using the same authentication credentials, require an X-Requested-With header in your API requests so as to avoid XSRF vulnerabilities.
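A minimal sketch pulling tips 1, 3, 5, and 7 together, using Express; routes, payloads, and header checks are illustrative, not a real API:

```ts
// Sketch combining several of the tips above with Express.
import express from "express";

const app = express();

// Tip 7: require a custom header to blunt XSRF on cookie-authenticated APIs.
app.use("/api", (req, res, next) => {
  if (req.get("X-Requested-With") !== "XMLHttpRequest") {
    return res.status(403).json({ error: "missing X-Requested-With header" });
  }
  next();
});

// Tip 1: the version lives in the path, so /api/v2/... can be reorganized freely.
app.post("/api/v1/reports", (req, res) => {
  // Tip 5: honor the Accept header instead of a ?format= parameter.
  if (!req.accepts("json")) return res.status(406).end();

  // Tip 3: 202 Accepted fits a job that is queued rather than finished.
  res.status(202).json({ status: "queued" });
});

app.listen(3000);
```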
I would take a look at proven APIs:
YouTube API
Twitter API
There's a lot of argument about whether these APIs are "good", but I think their success is demonstrated, and they're all easy to use.
Use REST.
RESTful web services architecture is easy to implement and uses the strengths and semantics of HTTP for what they were intended. It's resource-oriented, just like the web itself.
Amazon Web Services, Google and many others offer REST APIs to interact with their products.
Use REST.
Read up on standards for APIs, or copy the ideas from one of the popular ones.
Be careful when authenticating users.
Start very very simple.
Build a site that uses your API (even if it's not useful) to check that things work. Perhaps you could build a mobile version of the site, or something else that forces you to use the API in a lot of depth.
