I have been going through iframe-related questions and got stuck on this:
They break the one-document-per-URL paradigm, which is essential for the proper functioning of the web (think bookmarks, deep links, search engines, ...).
Moreover, if I ask you to explain the difference between iframes and frames, what is the easiest way to do it?
Frames allow you to display multiple documents on one page. They all have distinct URLs, which are hidden from you, so you cannot bookmark them separately.
An iframe is a frame in an otherwise normal document.
Normal frames are placed in a document which consists entirely of frames (plus, hopefully, some fallback content for user agents that don't support them).
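To make that concrete, here is a minimal sketch (the file names are made up): a frameset document has no body of its own, while an iframe sits inside an otherwise normal page.

    <!-- A frameset document: the whole page is frames, there is no <body> -->
    <frameset cols="30%,70%">
      <frame src="menu.html">
      <frame src="content.html">
      <noframes><p>Fallback for user agents that don't support frames.</p></noframes>
    </frameset>

    <!-- An iframe: a single frame embedded in an otherwise normal document -->
    <body>
      <h1>A normal page</h1>
      <iframe src="other.html" width="400" height="300">
        Fallback for user agents that don't support iframes.
      </iframe>
    </body>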
Given a website/blog's RSS feed link, is there any way to get that site's entire RSS history (all its blog posts EVER) in a single XML file?
Is this something that is only possible from the other end (i.e. a site publishes its entire post history as RSS)? In which case, how is this achieved?
Thanks!
RSS is just another way of expressing the data, and it depends entirely on the site. Even if one site provides a way for you to specify how many items you want (which is unlikely), that won't work on other sites.
Technically speaking, formatting the data in RSS is no different than formatting it in HTML. For example, many sites (including this one), need to represent some sequential data (questions in SO's case) on a page in HTML. To do this, the site will iterate through some data source (like a database), and output HTML so your web browser can render it, until it hits some limit. Knowing that limit is impossible, as it depends on the site. This is exactly what RSS does: it iterates through a data source, spitting out XML as it goes along. Again, knowing the limit is not possible.
Is this something that is only possible from the other end ...? In which case, how is this achieved?
If you can change how your site generates the RSS, simply remove the limit. I know this is vague, but it really depends on the implementation. There are dozens of RSS implementations, all different, and all behaving differently.
So my point is: nothing will work universally; you have to change the site itself to modify that behavior.
You are right there. The site has to publish its entire history; otherwise you can't get it. Doing it on the server side, if you have access to the database, is quite easy: just dump all the rows as XML. It actually takes effort to filter and limit the XML. How can you do it on blogging platforms? You could use plugins that allow you to do this.
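As a rough server-side sketch of "dump all the rows" (Node.js with the mysql2 package; the table and column names are invented), the key point is simply that there is no LIMIT clause:

    const mysql = require('mysql2/promise');

    async function dumpFullRss() {
      const db = await mysql.createConnection({
        host: 'localhost', user: 'blog', database: 'blog'
      });
      // No LIMIT clause: every post ever published goes into the feed.
      const [rows] = await db.execute(
        'SELECT title, url, body, published_at FROM posts ORDER BY published_at DESC'
      );
      await db.end();
      const items = rows.map(p =>
        '<item><title>' + p.title + '</title>' +
        '<link>' + p.url + '</link>' +
        '<description><![CDATA[' + p.body + ']]></description>' +
        '<pubDate>' + new Date(p.published_at).toUTCString() + '</pubDate></item>'
      ).join('');
      // Real code should also XML-escape the title and URL fields.
      return '<?xml version="1.0"?><rss version="2.0"><channel>' +
             '<title>Full archive</title>' + items + '</channel></rss>';
    }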
I am about to implement a function that loads a potentially large set of data (~1000 rows with ~10 columns). I am planning to implement an infinite scrolling solution (AJAX, jQuery, ASMX) as a performance measure. However, if a user has JavaScript disabled, or Googlebot comes a-crawling, I would like to generate the entire set of data at once, so that no data becomes inaccessible in either of those two scenarios.
I'm not sure what approach to use here. Should I look towards the noscript tag, perhaps?
In my experience, if you're expecting 1000 rows and any sort of traffic, you need to offer two scenarios.
I would use the noscript tag and then offer a paginated view for non-JS users. Or, you can do as I have done in the past and simply explain, also through the noscript tag, that your application requires JavaScript to be left on. Anyone who runs around the internet with JavaScript turned off is most likely used to the internet not working the way it should, or to only getting a partial experience.
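As a rough sketch of that idea (the URLs and element names are placeholders): non-JS visitors get a plain paginated link via noscript, while the script fills the table in as the user scrolls.

    <noscript>
      <p>JavaScript is off. <a href="/data?page=1">Browse the data page by page</a>.</p>
    </noscript>

    <table id="data"><tbody></tbody></table>

    <script>
      // Infinite-scroll stub (jQuery, since the stack already uses it):
      // fetch the next page of rows when the user nears the bottom.
      var page = 1, loading = false;
      $(window).on('scroll', function () {
        if (loading) return;
        if ($(window).scrollTop() + $(window).height() >= $(document).height() - 200) {
          loading = true;
          $.getJSON('/data.json', { page: page++ }, function (rows) {
            $.each(rows, function (i, cells) {
              var tds = $.map(cells, function (c) { return '<td>' + c + '</td>'; }).join('');
              $('#data tbody').append('<tr>' + tds + '</tr>');
            });
            loading = false;
          });
        }
      });
    </script>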
Try using a client-side pagination class with JS: it's lightweight and very user friendly, and if the browser doesn't run JS, no problem, the user will just see one huge data table :)
http://en.newinstance.it/2006/09/27/client-side-html-table-pagination-with-javascript/
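The linked class aside, the core of client-side pagination is just hiding rows; a minimal hand-rolled version (the element id is hypothetical) might look like:

    function paginateTable(table, pageSize, page) {
      var rows = table.tBodies[0].rows;
      for (var i = 0; i < rows.length; i++) {
        // Show only the rows that fall on the requested page.
        rows[i].style.display =
          (i >= page * pageSize && i < (page + 1) * pageSize) ? '' : 'none';
      }
    }
    // Usage: show the first 50 rows of <table id="data">.
    paginateTable(document.getElementById('data'), 50, 0);

Since the full table is already in the HTML, crawlers and non-JS users still see every row.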
I understand that frames are a lot more typing to implement than iframes, and that they require a lot more styling. I am currently working on a website which must download some content (in fact, an entire set of web pages) from another website, one by one of course, depending on the user's actions on the main website. Iframes seem to be a quick and dirty way to implement this requirement, but what I am worried about is performance and integrity.
I would like some advice on what I would rather use when the following criteria are met:
The pages that must be downloaded onto my webpage are quite large (width and height)
They contain multiple images
They experience occasional downtime (maintenance)
Any ideas for a man in wonder?
At this point, go with iFrames:
iFrames are part of HTML5; framesets are obsolete in HTML5.
With a frameset you have to load a full page into each frame; iFrames can be embedded anywhere in a normal document.
You can style, hide, resize either, but iFrames are much easier to work with in this regard.
I've seen cases where the developer went with a frameset because he couldn't get the iFrame to size properly, but this isn't too big a deal with a little JavaScript (if even that). The only reason to use a frameset is if you don't fear its eventual deprecation in modern browsers, and/or if you can't get iFrames to size the way you want based on the content you're integrating and need a quick solution.
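For the sizing problem, the "little JavaScript" can be as small as this sketch (same-origin pages only; cross-origin frames block access to contentDocument, and the file name is made up):

    <iframe id="embed" src="page.html" onload="fitToContent(this)"></iframe>
    <script>
      function fitToContent(frame) {
        // Only works when the framed page comes from the same origin.
        var doc = frame.contentDocument || frame.contentWindow.document;
        frame.style.height = doc.body.scrollHeight + 'px';
      }
    </script>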
If this is about display of 3rd party data in your site, you could use data feeds from the other sites if they're available, or use a screen scraper to extract the information you need, then display it in your own way on the page.
Unless it needs to look exactly like the other page.
Check out this link on screen scraping for ASP.NET
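That link covers the ASP.NET side; purely to illustrate the scraping idea, here is a rough sketch in Node.js using the cheerio package (the URL and selector are invented):

    const cheerio = require('cheerio');

    async function scrapeHeadlines(url) {
      const html = await (await fetch(url)).text(); // global fetch, Node 18+
      const $ = cheerio.load(html);
      // Pull out only the fragment you need, then restyle it yourself.
      return $('h2.headline').map(function (i, el) {
        return $(el).text().trim();
      }).get();
    }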
I run an online literary journal, which leads to an indexing problem: our content is not "about" literature, it is literature. As such, Google is really bad at identifying what's going on, and given the very low keyword density we have to work with, I've been looking for ways to slash interface text and turn it into iconography where possible.
I've been looking for a way to do the same with our post dates, but it's been a long search. I stumbled across the idea of using CSS generated content, content: attr(id), to substitute the ID attribute of an invisible image into the page itself.
This works on the display level. However, I haven't been able to track down anything conclusive on whether this interface-only text will still get indexed, or whether we'll be able to move away from months and days of the week being our most frequent keywords. I know Google will still see it; does anyone know if it'll "count"?
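For reference, the trick boils down to something like this (shown here on a span, since generated content on replaced elements like img is inconsistently supported; the id value is made up):

    <span class="postdate" id="march-14-2011"></span>
    <style>
      /* The visible date comes from the attribute, not from text content. */
      .postdate::after { content: attr(id); }
    </style>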
As far as I'm aware, the 'best' way to ensure something is hidden from a search engine is to either load it via AJAX or (shudder) include it with Flash.
If you feel that the non-content aspects of your site are adversely affecting your site's standing in the various search engines, you could load these elements via AJAX.
Only if you really think these elements are seriously affecting your position.
[Image: a screenshot marking areas of this page that one could conceivably post-load via AJAX, if one was overly concerned about their impact on SEO.]
I know this doesn't specifically answer your question; it's a suggestion for an alternative way to tackle your issue.
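As a sketch of that post-loading idea (jQuery shown as one option; the /sidebar URL and element are hypothetical):

    <div id="sidebar"></div>
    <script>
      $(function () {
        // Fetched after the initial HTML, so a crawler reading the raw page
        // sees the literary content rather than the interface chrome.
        $('#sidebar').load('/sidebar');
      });
    </script>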
I'm implementing "news" section in asp.net website. There is a list of short versions of articles on one page and when you click one of the links it redirects you to a page with a full article. The problem is that the article's text on the second page will come from database but the articles may vary - some may have links, some may have an image or a set of images, may be differently formatted etc. The obvious solution that my friend have come up with is to keep the article in the database as html including all links, images, formatting, etc. Then it would be simply displayed on the second page. I feel this is not a good solution as if, for example, we decide to change the css class of some div inside this html (let's say it is used in all articles), we will have to find it and change in every single record of the articles table in our database. But on the other hand we have no idea how to do it differently. My question is: how do you usually handle something like this?
I personally don't like the idea of storing full html in the database. Here's an attempt at solving the problem.
Don't go for a potentially infinite number of layouts. Yes, all articles may be different, but if you stick to a few good layouts then you're going to save yourself a lot of hassle. These layouts can be stored as templates, e.g. ArticleWithImagesAtTheBottom, ArticleWithImagesOnLeft, etc.
This way, your headache is less as you can easily change the templates. I guess you could also argue then that the site has some consistency in layout.
Then for storage you have at least 2 options:
Use the model-per-view approach and have, e.g., an ArticleWithImagesAtTheBottomModel with properties like FirstParagraph, SecondParagraph, MainImage, ExtraImages.
Parse the article according to the template you want to use, e.g. look for a paragraph break if you need to.
Always keep the images separate and reference them in another column/table in the db. That gives you most freedom.
By the way, option #2 would be slower as you'd have to parse on the fly each time. I like the model-per-view approach.
Essentially I guess I'm trying to say: beware of making things too complicated. An infinite number of layouts means an infinite number of potential problems. You can always add more templates as you go if you really want to expand, but you're probably best off starting with, say, 3 or 4 layouts.
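As a rough sketch of the model-per-view idea, in JavaScript purely for illustration (all names invented): each template renders its own model, so a layout change means editing one template rather than every stored article.

    var templates = {
      ArticleWithImagesAtTheBottom: function (m) {
        return '<article>' +
               '<p>' + m.firstParagraph + '</p>' +
               '<p>' + m.secondParagraph + '</p>' +
               m.extraImages.map(function (src) {
                 return '<img src="' + src + '">';
               }).join('') +
               '</article>';
      }
    };

    function renderArticle(article) {
      // article.template names the layout; article.model holds the content.
      return templates[article.template](article.model);
    }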
EDITED FROM THIS POINT:
Actually, thinking about it this may not be the best solution. It could work depending on your needs, but I was wondering how the big sites do it. If you really need that much flexibility, you could (as I think was sort of suggested) use a custom markup. Maybe even a simplified or full wiki markup. I'd still tend toward using templates in general, but if you need to insert at least links and images then you can parse for those.
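A toy illustration, again in JavaScript, of the simplified-markup route (the token syntax is invented, and real code would also escape the stored text):

    function renderMarkup(text) {
      return '<p>' + text
        .replace(/\[img:(.+?)\]/g, '<img src="$1">')
        .replace(/\[link:(.+?)\|(.+?)\]/g, '<a href="$1">$2</a>')
        .replace(/\n\n/g, '</p><p>') + '</p>';
    }
    // renderMarkup('See [link:http://example.com|this site].')
    //   -> '<p>See <a href="http://example.com">this site</a>.</p>'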
Surely the point of storing HTML with logically placed <div>s is that you DON'T have to go through every bit of HTML you store to make changes to styles?
I presume you're not using inline styles in your stored HTML, and are referencing an external CSS file, right?
The objection you raise to your colleague's proposal does not say anything about the use of a DB. A DB as opposed to what: files? Then it's all the same. If you want to screw around with the HTML, you have to do it on "every single record," which is no harder than on every single file. Global changes are a bitch unless you plan for them by, say, referencing an external CSS file. But if you're going to have millions of news articles, you had better plan on versioning the CSS as well.
Anyway, the CMSes do what you're thinking of doing. Using a DB is a fine way to go. How to use it would depend on knowing the problem more intimately.
Have you looked into using free content management systems? I can think of a few good ones:
Joomla
Drupal
WordPress
TONS of others... just do some googling.
Check out this Wikipedia article: http://en.wikipedia.org/wiki/List_of_content_management_systems