Is there a web API to find containing block of an element?
The official definition of "containing block" is given here: https://www.w3.org/TR/CSS22/visudet.html#containing-block-details . I am hoping each element's containing block is stored somewhere and can be retrieved using some web API.
"Containing block" is an abstract concept, not a concrete one. For this reason, there aren't any APIs in either CSSOM or cssom-view for "retrieving" the containing block of an element, and even if there were, you wouldn't be able to read, change, or render it anyway, so this information isn't going to be of any use to you as an author.
In all likelihood, though, you're asking this not because you're trying to manipulate this information, but because you just want to be able to visualize an element's containing block for debugging purposes. That's why it's something I think all browser developer tools should have: because your use case is served by developer tools, not cssom-view.
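If you do want to approximate it in script for debugging, you can walk the ancestor chain and apply the spec's rules yourself. The nearest existing API is HTMLElement.offsetParent, but it is defined in terms of offset calculations, not containing blocks. Below is a rough sketch in plain JavaScript; it handles the common cases (static/relative, absolute, fixed, and ancestors that establish a containing block via transform, filter, or perspective) but deliberately ignores inline boxes, display types, and the spec's other edge cases:

// Rough approximation of the CSS containing-block rules; not an official API.
function findContainingBlock(el) {
  const position = getComputedStyle(el).position;
  for (let p = el.parentElement; p; p = p.parentElement) {
    const s = getComputedStyle(p);
    // transform, perspective, and filter force an ancestor to establish
    // a containing block even for fixed/absolute descendants
    const forcesCB = s.transform !== 'none' ||
                     s.perspective !== 'none' ||
                     s.filter !== 'none';
    if (position === 'fixed') {
      if (forcesCB) return p;
    } else if (position === 'absolute') {
      if (s.position !== 'static' || forcesCB) return p;
    } else {
      return p; // static/relative: nearest ancestor box (approximation)
    }
  }
  return null; // initial containing block (the viewport)
}

Outlining the returned element (or the viewport when null) with a semi-transparent overlay is usually enough for debugging purposes.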
I'm tasked with evaluating some legacy web pages (classic ASP) for accessibility. You can assume the HTML is not perfectly formed, that it's loaded with inline JavaScript, and that we make use of JavaScript libraries that vomit HTML to create dynamic features. It's a circus in there.
While I recognize that the obvious answer is to rewrite the page(s), that's not an option on our given timetable. So I'm trying to find the best way to make the pages work with a screen reader. Here's what I think I know.
We can use JAWS scripting to instruct the browser how to read the page.
We can use ARIA attributes to give the pages better organization and structure.
Specifically, I'm trying to figure out:
Question 1) If a JAWS script is present, will it be used exclusively by the browser/screen reader, ignoring any improvements I make in the underlying HTML structure?
Question 2) Could some well-placed ARIA attributes give the page enough structure so that the default screen reader behavior will work in an acceptable manner (without a JAWS script)?
Question 3) I suspect the tough answer is that I would need to do both, which I'm trying to avoid because we barely have the capacity to do just one. But we don't want to lose a customer, of course. :-(
Many thanks for any input.
Instead of explaining only to JAWS how to access your pages, use JavaScript to explain it to any Assistive Technology (AT) for the web. I expect it to take the same effort, while benefiting far more users.
In a JAWS script you would need to describe ways to access DOM nodes that are not accessible. That would include
speaking out information that you have to find elsewhere on the page
adding keyboard navigation where it's missing
Both can be done in JavaScript, probably even more easily, since you'll need to address DOM elements either way.
What you will need to avoid is restructuring the DOM and changes to classes, since those are most likely used by the scripts that generate them.
But I'd expect that adding attributes and keyboard handlers will do no harm to the existing scripts. Beware of already existing handlers for focus or keyboard events, though.
I would recommend making a list of attributes and handlers you suspect might conflict with the existing scripts, and searching those scripts for them, like onkeypress or onfocus event handlers.
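As a sketch of what such a retrofit might look like (the div.fake-button class is hypothetical; substitute whatever the generating scripts actually emit), here is how you could bolt semantics and keyboard access onto an element without moving it or touching its classes:

// Retrofit a script-generated <div> that behaves like a button.
document.querySelectorAll('div.fake-button').forEach(function (el) {
  el.setAttribute('role', 'button'); // announce it as a button to AT
  el.setAttribute('tabindex', '0');  // make it keyboard-focusable
  el.addEventListener('keydown', function (e) {
    // activate with Enter or Space, like a native <button>
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault();
      el.click(); // reuse the existing mouse click handler
    }
  });
});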
The absolute best way to make your application/site accessible is to use semantic HTML. It doesn't matter if that HTML is generated by asp or jsp or whatever.
If you have a table, use a <table>.
If you have a heading, use an <h2>.
If you have a list, use a <ul>.
Use <section>, <article>, <nav>, <aside>, <header>, <footer>, etc
That's how you create structure on your page that a screen reader user will appreciate.
If you can't use native HTML, then fall back to ARIA, but treat it like salt. A little bit greatly enhances the flavor but too much spoils the meal.
If you can't use a native <h2>, then make sure you use the appropriate role and attributes.
<div role="heading" aria-level="2">this is my custom h2</div>
If you can't use a native <header>, then make sure you use the appropriate role and attributes.
<div role="banner">my header stuff goes in here</div>
I would recommend totally forgetting about JAWS scripts. It doesn't matter if that's what the customer thinks they should focus on. It's not about that customer. It's about that customer's customers. The end users. They should be able to use whatever screen reader they are used to and most comfortable with. That's the whole purpose of accessibility - making the site usable and accessible by as many people as possible, using whatever assistive technology they are used to.
Following the Web Content Accessibility Guidelines (WCAG) will lead you to that result.
I am using a Repeater web part in Kentico to pick out pages from the content tree and generate nicely repeatable snippets of structured HTML, based on an ASCX transformation. (No surprises here - it's been working great!)
However, a new requirement landed whereby, alongside the existing HTML structure mentioned above, each repeated item must also have an area where we can add any amount of additional content, based on other web parts.
I have previously written a few "layout" type web parts implementing CMSAbstractLayoutWebPart, as described here, which has allowed me to generate a repeating number of web part zones, so I feel like I'm half way there. The issue with that, though, is that as it stands I don't seem to be able to make use of the great power and flexibility of the transformations on the page type (which I really think I need to do, and which seems like it should be possible).
I thought I may be able to specify the WebPartZone control in the transformation markup directly, like in the following:
<%@ Register Src="~/CMSInlineControls/WebPartZone.ascx" TagName="CMSWebPartZone" TagPrefix="cms" %>
<cms:CMSWebPartZone ZoneID="ZoneIDHere" runat="server" />
<div>
<h3><%# Eval("Heading") %></h3>
<p><%# Eval("Summary") %></p>
</div>
But the design view doesn't seem to pick up the web part zone, so I'm assuming the page lifecycle may not allow me to do this as I'd hoped.
So what I would like to know is:
Is it possible to include WebPartZone control in a transformation such that I can then bring in new web parts in Design view?
If not, what is the recommended way to go about this? (If a custom web part is the way to go, I'd like to clone the Repeater web part in the first instance, as many of its existing properties will be needed - but presumably this must still inherit from CMSAbstractLayoutWebPart?)
Thanks!
Update
Good point about the editor's experience; I would definitely like to keep this as consistent as possible. The issue for me is that the requirements that drive my data structures are not always fully understood - and are certainly subject to change. Also, they are liable to vary (albeit subtly) across different products. So I've been trying to keep templates and page types more or less the same across the board, and push out the differences into page properties that drive web part config through macros. So given that the transformation approach won't work, I expect a custom web part is the right fit for me.
I shall post my findings!
I think adding a web part zone into a transformation is not the right direction, as a web part zone should be part of the page template (not the transformation) in order to be utilized.
I'd probably try to organize my content so each item you're currently showing in the repeater has any number of child pages (potentially of a different type) and use something like the hierarchical viewer to present all of them on the page. It allows using different transformations based on either page type or node level. Another advantage of this approach is that you keep the editors' experience consistent.
In the end, I was able to use transformation markup to specify the generation of web part zones. I went down the route of creating a custom web part that inherits from CMSAbstractLayoutWebPart, rather than using CMSRepeater web part or similar...
Here's a breakdown of what I needed to do:
Gave the custom layout-type web part some properties with which to query the content tree, and supply them to a TreeProvider.SelectNodes() method in the web part code once it has initialised (by overriding the OnInit() method)
Gave the web part a TransformationName property so that the raw markup can be retrieved using TransformationInfoProvider.GetTransformation(this.TransformationName)
Used the markup above and resolved macros within it using each node from the node query
Example of macro resolution code (HTML transformations with macros)
protected virtual string ResolveNode(TreeNode node)
{
    // Create a child resolver so each node gets its own macro context
    var resolver = this.ContextResolver.CreateChild();
    // Expose the node's fields as a data source for macro resolution
    resolver.AddAnonymousSourceData(node);
    // Resolve all macros in the raw transformation markup against this node
    return resolver.ResolveMacros(rawTransformationMarkup);
}
Then I went looking for placeholder text in the transformation markup and used the methods available in the CMSAbstractLayoutWebPart parent class(es), as detailed here, to Append() the resolved markup, calling AddZone() as necessary to tap into the response string builder.
Summary: The great functionality of the API allowed me to completely avoid the use of any repeater controls. I could generate web part zones as part of the layout web part usual layout generation process.
It would be nice if I could figure out how to resolve the expressions in SCRIPT tags in ASCX transformations to complete the story, but by using HTML transformations I can use the above to accomplish what I need.
I am searching for a solution for crawling & parsing a whole website (an online shop) automatically and saving all products, as product name and product price, in a CSV.
Gaining data from a website can be extremely simple or the complete opposite. It depends on how the website is made. A shop tends to be a complex website, and thus the DOM (the HTML structure) is mostly unique to that website. It is very unlikely that someone else has tried the exact same thing you want for that page. So you have to write code and extract the necessary pieces.
This will be our example product: http://www.thomann.de/gb/focusrite_scarlett_2i2.htm
HTML uses classes to tell the CSS (for styling) how to design or render a certain element. You can use this behaviour to your advantage and find an element containing the price by its class. In this example it is .tr-prod-price.
Every major browser has an inspect-element function, and it can be used to find the class of an element which appears on screen. Right-click your text (price or title) and press Q (Firefox only).
Now you've got closer to parsing your data, and it is time to write code. You could use Python, Java, or even JavaScript, to give some examples. JavaScript in conjunction with Node.js could be very easy, because JS has the built-in methods we need.
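As a sketch of the Node.js route, using the third-party cheerio package for the DOM parsing (the h1 selector for the product name is an assumption, and the .tr-prod-price class is the one found above; either may change whenever the shop redesigns):

// npm install cheerio
const cheerio = require('cheerio');

async function scrapeProduct(url) {
  const res = await fetch(url);             // fetch is built into Node 18+
  const $ = cheerio.load(await res.text()); // parse the product page's HTML
  return {
    name: $('h1').first().text().trim(),    // assumption: title is the first <h1>
    price: $('.tr-prod-price').first().text().trim(),
  };
}

// one CSV line per product
scrapeProduct('http://www.thomann.de/gb/focusrite_scarlett_2i2.htm')
  .then(p => console.log('"' + p.name + '","' + p.price + '"'));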
You may need a search engine to find the detail pages of a product. Google can list all results with site:thomann.de/gb. But of course Google does not provide an easy way (an API) to get this information, and if you start writing your own parser for that, I am not sure about the legal consequences. The legal side also needs to be addressed for your main intention.
So we should make accessible web sites, providing alt attributes for img elements and all the other stuff. But although this affects a comparatively small number of users, I could not find any information about issues that affect each and every user.
Let me explain. If we were to simplify matters by saying that web sites should provide the most relevant information in the least amount of time, would I be wrong? Given this axiom, if I were to:
1 - Want to download the offline version of Acrobat Reader X. There is nothing, and I mean nothing, on the site http://www.adobe.com/products/reader.html which provides a hint, link or anything to that. I have to use Google to find ftp://ftp.adobe.com/pub/adobe/reader/
2 - Again, trying to find the offline version of Google Chrome at http://www.google.com/chrome/ . Nothing there that may lead to http://www.google.com/chrome/eula.html?standalone=1
3 - So Internet Explorer has an addon called Web Developer Tool Bar. It is safe to assume I will find it at http://www.ieaddons.com/in/. No such luck. I have to google it again and find it at http://www.microsoft.com/downloads/en/details.aspx?FamilyID=95e06cbe-4940-4218-b75d-b8856fced535
4 - Trying to get the Firebug addon from https://addons.mozilla.org/en-US/firefox/extensions/web-development/. Successfully navigated to web development. You can use "view all recently added" or "view all top downloads" or "view all top rated". What if you want to view all for web development? Of course you use the search!
These are just some of the situations. I guess my question would be: are these not accessibility issues?
If the issues you are describing apply equally to, say, sighted users as to blind users using a screen reader, then no, they are not considered accessibility issues, but are perhaps broader usability issues.
If, for example, the adobe web site had no link at all to the offline version, and all users, sighted or not, had to do extra work to find it, that's a usability issue.
But if the web site had a graphic image that sighted users could see was a link to the download, but users using a screen reader did not get this information (e.g. because the graphic had no ALT text, or the image was not operable via keyboard), then it's an accessibility issue.
There's certainly overlap between these; and it's often the case that usability issues are harder for disabled users to work around; but generally accessibility refers to cases where the design of a site confronts a user with a disability with additional barriers or challenges beyond those that users without a disability have to deal with.
I think it depends on your definition. Some definitions describe accessibility assuming that the correct website is known and is concerned only with the accessibility of that website. Others do describe the ease of users finding the required resource on the Web, which would encapsulate your issues above.
There are two reasons why accessibility is a failure on the web, and for these failures the technology HTML is to blame for both.
1) HTML is not self-validating. SGML does not have a direct self-validating subset, and all versions of HTML prior to 5 are subsets of SGML. HTML5 is based upon a specification document not vested in any computer language, so it is perhaps even more lost.
XML does have a direct self-validating subset called schema. There are three widely recognized schema languages for XML: Schematron, Relax NG, W3C XML Schema (official).
By self-validating I mean that the language itself can be called upon to validate its instances without external assistance from the local parser. Without a self-validating component there is no assurance of the integrity of a document's structure, and therefore no integrity of accessibility. In a world where web browsers will parse anything without regard for proper well-formedness of structure, in practice everything is acceptable, completely without regard for accessibility.
2) Less obvious and more devastating is that HTML does not understand its own structure. There are two levels of structure as defined in the HTML specifications: block-level elements and inline elements. According to the specifications the difference between these two structure levels is vested primarily in the visual intention of the elements' presentation, which contradicts other language in the specifications in that HTML is a data structure and not a presentational language.
Furthermore, two levels of structure are insufficient, and the actual structural definition of HTML elements exceeds a two-level structure anyway without inherently stating so. For example, in HTML many block-level elements may contain a 'p' element representing a paragraph, but such an element may not contain other block-level elements, although many other block-level elements may certainly contain block-level children.
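You can watch the parser enforce this rule in any browser console: a block-level div placed inside a p forcibly closes the paragraph, whether the author intended that or not:

// The HTML parser auto-closes <p> when it meets a block-level <div>,
// so the div never becomes a child of the paragraph.
const doc = new DOMParser().parseFromString(
  '<p>text<div>block</div></p>', 'text/html');
console.log(doc.body.innerHTML);
// "<p>text</p><div>block</div><p></p>"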
At a minimum a three level structure is required to describe natural language in a manner consumable to a human audience equally without need for further accessibility assistance. In accordance with the structure defined in Mail Markup Language there would be:
Complex blocks
Simple blocks
Inline elements
Complex blocks are purely structural in that they may contain simple blocks, or in some cases other complex block elements, but will never contain inline elements or text nodes. Simple blocks will never contain complex block or simple block elements, but may contain inline elements or text nodes. Inline elements are either singletons containing nothing or will contain text nodes, but inline elements will never contain other elements.
Such a structure is self-sufficient in properly arranging and structuring content so that accessibility requirements are met immediately in a manner where violations of accessibility requirements are more costly and complex than simple conformance to the given structure. Once a sufficient structure is in place all that is missing is the meta data supplied via descriptive and well-known element names, and in some cases additional extraneous content via attributes.
If either of these two items is missing, a minimum baseline for accessibility cannot be assured. When both are missing, as on the web, accessibility is likely a lost cause and an immediate failure.
Web accessibility
A website is made up of different content, like images, text, videos, buttons, etc., in combination with different colors.
Web accessibility means that people with disabilities can use the Web.
Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web.
Web accessibility also benefits others, including older people with changing abilities due to aging.
The main theme of web accessibility is creating a website which is accessible to everyone. After designing a website it is essential to check it for ADA compliance: whether it is accessible, and how user-friendly it is for disabled people.
What is the usefulness of W3C's Semantic Data Extractor?
http://www.w3.org/2003/12/semantic-extractor.html
This tool, geared by an XSLT stylesheet, tries to extract some information from a semantically rich HTML document. It only uses information available through a good usage of the semantics defined in HTML.
The aim is to show that providing semantically rich HTML gives much more value to your code: using semantically rich HTML code allows a better use of CSS and makes your HTML intelligible to a wider range of user agents (especially search engine bots).
As an aside, it can give clues to user-agent developers on some hooks that could be interesting to add in their products.
After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool?
What does it do, and how can it improve our coding? Is anyone using it?
And I checked some sites randomly with it, but with most sites it gives an error:
Using org.apache.xerces.parsers.SAXParser
Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "</input>".
org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "</input>".
Is it possible to validate every site with this tool?
After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool?
Probably not.
What does it do?
Exactly what you quoted from its homepage.
And how can it improve our coding?
Other than pointing out when you have problems counting heading levels, not a lot.
And I checked some sites randomly, but with most sites it gives an error.
It depends on well-formed and sane input. The tool is driven by an XSLT stylesheet, so each page is parsed as XML; void elements written HTML-style, such as <input> without the self-closing slash XML requires, trigger exactly the SAXParseException shown above. Since most real-world pages are not well-formed XML, you cannot expect it to validate every site.