I want to write some basic accessibility tests in RSpec (obviously, to be further validated by other tools and real users later; this is to catch low-hanging fruit like images without alt attributes and such).
Most of the examples just check that some content is present; what I want to do is get a list of tags and then assert that "all" the tags found meet certain criteria (e.g. every image must have either an alt or a longdesc attribute; every form input needs either a label or a title, etc.).
Can RSpec do this, or if not, is there a tool that can?
Thanks.
You can use Webrat to test for XPath selectors in your view specs:
describe 'my/view.html.erb' do
  it 'should not have images without alt or longdesc attributes' do
    render
    rendered.should_not have_xpath('//img[not(@alt) and not(@longdesc)]')
  end
end
Capybara supports XPath selectors, too.
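Outside of a Rails spec, the same "collect the tags, then assert they all meet the criteria" idea can be sketched with nothing but a standard-library HTML parser. The sketch below is in Python and the class and function names are illustrative, not from any accessibility framework; it also simplifies the form-input rule to "has a title attribute", whereas a real check would accept an associated label too:

```python
from html.parser import HTMLParser

class A11yChecker(HTMLParser):
    """Collects simple violations: images without alt/longdesc,
    inputs without a title attribute (simplified rule)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs and "longdesc" not in attrs:
            self.violations.append(("img", "missing alt/longdesc"))
        # A fuller check would also accept an associated <label for=...>;
        # for brevity we only look for a title attribute here.
        if tag == "input" and "title" not in attrs:
            self.violations.append(("input", "missing title"))

def check(html):
    checker = A11yChecker()
    checker.feed(html)
    return checker.violations
```

The returned list plays the role of the failing assertion: an empty list means every collected tag met the criteria.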
I want to scrape LinkedIn activity posts - comments, number of views, and so on.
Which Selenium locator strategy should I choose: XPath or CSS?
I am trying to do this with XPath, but I have the feeling that it changes based on profile, language, and Chrome version. How can I do this for general usage?
Can anybody advise?
An XPath can change when JavaScript executes, and can differ between profiles. If XPath is your only option, that is fine, but if there is an id or a distinctive class you should use that instead.
In selenium, you have multiple options to select an element by id.
driver.find_element_by_id('ember87')
driver.find_element_by_xpath("//*[@id='ember87']")
And of course you can use any other CSS selector; generally this is the most convenient way.
driver.find_element_by_css_selector("#ember87")
driver.find_element_by_css_selector("div#ember87")
You can also use the parent element to make the selection more specific:
driver.find_element_by_css_selector("#ember72>#ember87")
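The id-based lookups above can be illustrated offline too. As a rough sketch (using Python's xml.etree on a small well-formed fragment rather than Selenium, and the ember ids from the example):

```python
import xml.etree.ElementTree as ET

# A small well-formed fragment standing in for the page.
fragment = """
<div id="ember72">
  <div id="ember87">price</div>
</div>
"""

root = ET.fromstring(fragment)
# The offline equivalent of find_element_by_xpath("//*[@id='ember87']")
element = root.find(".//*[@id='ember87']")
print(element.text)  # -> price
```

The point stands regardless of the tool: an attribute lookup by id survives page changes that would break a long positional XPath.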
I am a user, not a programmer, whose forthcoming new website on Plone 4 requires adding hyperlinks inside the Description field of pages and folders. This is needed to point specific words to our website Dictionary as we had been doing on EZ Publish for the last 10 years.
Our developer says this can't be done in Plone. I'm trying to help them find out how to do it (they don't seem to use English-language forums).
Is there an existing add-on or existing code for this? If not, is it possible to code this in? How? If not, will it become standard in Plone 5?
<a href="http://python.org">Python</a> will not work, as the description field is meant to be used as meta-information for an item, holds plain text only, and allows neither HTML elements nor embedded JavaScript. That's probably why T. K. Nguyen recommends providing an additional rich-text field.
But you can use reStructuredText instead. Tell your developer to change the description snippet in the affected templates to:
<div tal:define="Std modules/Products.PythonScripts/standard;
restructured_text nocall: Std/restructured_text;"
tal:content="structure python: restructured_text(context.Description())">
</div>
It will transform any word starting with 'http:' or 'https:' into a link, and will also recognize mail addresses like 'someone@plone.org' and turn them into mail links (clicking one opens the user's default mail client, if available, with the address pre-populated in the 'To' field).
If you want named links, use the reStructuredText syntax for the input, like this:
`Check out Python`_, you'll love it.
`Write a mail`_ to someone.
.. _Python: http://www.python.org
.. _Write a mail: someone@example.org
The tricky part is figuring out which templates are affected, but in my experience it's doable (I did it to preserve line breaks in listing views, not for reStructuredText).
Alternatively, use a JS workaround, as proposed by T. K. Nguyen. Be aware, though, that it may break accessibility for some users.
It is possible to customize the description fields to be rich text (HTML) instead of plain text, but it requires a developer.
You can also use JavaScript to look at a description field and replace (for example) any string that starts with "http" with a hyperlink pointing to that URL. Your developer would have to look for examples of such JavaScript code and then would have to know how to register it on your site and then invoke it.
This describes how to do something similar, for PloneFormGen field help text (which is also plain text):
https://designinterventionsystems.com/blog/how-to-make-urls-clickable-in-ploneformgen-field-help-text
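The string replacement that JavaScript would perform can be sketched language-independently; here it is in Python with a regex (the pattern is deliberately simple and would need hardening before production use, e.g. for trailing punctuation):

```python
import re

# Matches an http(s) URL up to the next whitespace character.
URL_PATTERN = re.compile(r'(https?://\S+)')

def linkify(text):
    """Wrap every http(s) URL found in the text in an anchor tag."""
    return URL_PATTERN.sub(r'<a href="\1">\1</a>', text)

# linkify("See http://python.org for details")
# -> 'See <a href="http://python.org">http://python.org</a> for details'
```

The JS version would do the same substitution on the rendered description node after page load.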
It might be easier to have your developer create a new rich-text description field and have all your content types include that new field. That, however, would require that you update the view templates for those modified content types. This is much easier with Dexterity, which ships with Plone 5 and is available for use with Plone 4.x.
imho it's a really bad idea to convert the description field to any rich-text (HTML, RST, MD) field. You would need to change a whole bunch of templates to avoid raw HTML being rendered everywhere.
Example:
search
collections
content
portlets
Addons
The description is also often used as the title attribute on links; in those cases you would need to convert it back to plain text. And there are several more issues you could run into.
As @T. Kim Nguyen wrote: consider adding a new text field and showing it where necessary, probably implemented as a viewlet in the below-title slot.
Looking at your current site, it seems like you want this to provide a teaser for each article, which may contain links. If that is the case, then you can find other ways to do this without making the description html.
For instance, if you used collective.cover for your portal/collection pages, a Rich Text tile would allow you to cut the object text down to an appropriate size, but still edit it with a rich-text editor and keep/insert hyperlinks.
I am searching for a solution for crawling & parsing a whole website (an online shop) automatically and saving all products with their product name and product price in a CSV.
Gaining data from a website can be extremely simple or the complete opposite. It depends on how the website is made. A shop tends to be a complex website, and thus the DOM (the HTML structure) is mostly unique to that website. It is very unlikely that someone else has tried the exact same thing for that page, so you will have to write code that extracts the necessary pieces.
This will be our example product: http://www.thomann.de/gb/focusrite_scarlett_2i2.htm
HTML uses classes to tell the CSS (for styling) how to design or render a certain element. You can use this behaviour to your advantage and find the element containing the price by its class. In this example it is .tr-prod-price.
Every major browser has an inspect-element function, which can be used to find the class of an element that appears on screen. Right-click your text (price or title) and press Q (Firefox only).
Now you've got closer to parsing your data, and it is time to write code. You could use Python, Java, or even JavaScript, to give some examples. JavaScript in conjunction with Node.js could be very easy, because JS has the built-in methods we need.
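As a minimal sketch of the extraction step, here is a Python version using only the standard library. The class name .tr-prod-price is taken from the example above; a real crawler would also need to fetch the pages and handle messier markup:

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Grabs the text of the first element whose class list
    contains tr-prod-price."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price = None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "tr-prod-price" in classes and self.price is None:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.price = data.strip()
            self.in_price = False

snippet = '<div class="price tr-prod-price">€149</div>'
parser = PriceParser()
parser.feed(snippet)
print(parser.price)  # -> €149
```

Writing the collected name/price pairs to a CSV is then a one-liner with the csv module.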
You may need a search engine to find the detail pages of the products. Google can list all results with a query like site:thomann.de/gb. But of course Google does not provide an easy way (an API) to get this information, and if you start writing your own parser for that, I am not sure about the legal consequences. The legal side also needs to be addressed for your main intention.
What is the usefulness of W3C's Semantic Data Extractor?
http://www.w3.org/2003/12/semantic-extractor.html
This tool, geared by an XSLT stylesheet, tries to extract some information from an HTML semantic-rich document. It only uses information available through a good usage of the semantics defined in HTML.
The aim is to show that providing semantically rich HTML gives much more value to your code: using semantically rich HTML code allows a better use of CSS, and makes your HTML intelligible to a wider range of user agents (especially search-engine bots).
As an aside, it can give clues to user-agent developers on some hooks that could be interesting to add in their product.
After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool?
What does it do, and how can it improve our coding? Is anyone using it?
I checked some sites randomly with it, but with most sites it gives an error:
Using org.apache.xerces.parsers.SAXParser
Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "</input>".
org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "</input>".
Is it possible to validate every site with this tool?
After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool?
Probably not.
What does it do?
Exactly what you quoted from its homepage.
And how can it improve our coding?
Other than hitting you over the head when you have problems counting heading levels: not a lot.
I checked some sites randomly with it, but with most sites it gives an error
It depends on well-formed and sane input.
For example, I want to check that on every page an <h3> tag only appears after an <h2>; otherwise the page should be marked.
Likewise, if a page contains a PDF, then some particular text, <p>Download Adobe Reader from here</p>, should be at the bottom of that page; if this condition is not met, the page should be marked.
I want to define different kinds of conditions, check them against the whole site, and generate a report of any mismatches.
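The heading-order condition is simple enough to sketch with a small parser. Here is a rough illustration in Python (the reporting format and names are made up for the example; a site-wide tool would run this over every fetched page):

```python
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    """Flags pages where a heading level is skipped,
    e.g. an <h3> with no preceding <h2>."""
    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.marked = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if level > self.last_level + 1:
                self.marked.append(
                    f"<{tag}> appears without a preceding <h{level - 1}>")
            self.last_level = level

def check_headings(html):
    checker = HeadingOrderChecker()
    checker.feed(html)
    return checker.marked
```

The PDF condition would be a second checker of the same shape: note any link ending in .pdf, then verify the required paragraph appears before the page ends.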
Do you necessarily have to use XHTML? I'd use Python and BeautifulSoup, myself.
(Edit: I was confused - I was thinking of XSLT, not XHTML, and I thought "why would you use XSLT for something like this?". XHTML is fine, and my recommendation of Python and BeautifulSoup still stands.)
This ruby gem looks like it could be useful to you:
http://code.google.com/p/opticon/
I haven't personally used it, but it claims to basically do what you're asking for.
I've had, and still have, the same need on many of my projects. In my case I'm looking for anything with the class 'error'. This is supported by the TestPlan product in its verification engine.
In my case, as a quick example, I have several "Web" states and my generic verify script is:
CheckNot //div[@class='error']
Now the way TestPlan works is that every state within "Web" will first run this generic verify script.
If you're interested I could help you come up with the exact syntax needed to do your check.