I'm updating a site to be WCAG 2.0 AA compliant and wanted to know if the requirements are inherited as the levels go up.
For example:
Does Level AA mean you must satisfy Level AA and Level A?
Does Level AAA mean you must satisfy Level AAA and Level AA and Level A?
I'm fairly certain it does, just wanted to be extra clear before I commence.
In a nutshell - yep, it does. No level cheats or skips available.
WCAG 2.0
Here's the quote from the current version:
Conformance Level: One of the following levels of conformance is met
in full.
Level A: For Level A conformance (the minimum level of conformance), the Web page satisfies all the Level A Success
Criteria, or a conforming alternate version is provided.
Level AA: For Level AA conformance, the Web page satisfies all the Level A and Level AA Success Criteria, or a Level AA conforming
alternate version is provided.
Level AAA: For Level AAA conformance, the Web page satisfies all the Level A, Level AA and Level AAA Success Criteria, or a Level
AAA conforming alternate version is provided.
WCAG 2.1 - it's new, it's fresh, it's shiny
The problem with WCAG 2.0 is its age: the Recommendation was published back in 2008, almost 10 years ago. Technology, including techniques and discoveries, has moved on a lot since then, so it doesn't capture everything.
One example: mobile devices such as tablets, smartphones, smart glasses, smart watches, smart tables, handheld game devices, video game consoles, and so on have all come about and changed how we interact with web content.
Don't forget to include WCAG 2.1 - the latest WCAG standard - which addresses some of these gaps before WCAG 3.0 is out in some form next year.
ARIA, ARIA - where art thou ARIA?
Don't forget about ARIA, which is just as important, especially for any SPA functionality or if you use frameworks like Angular or React.
Hope this helps.
When building a project with the Form Recognizer v2.0 API, I get a page limit error when trying to train with more than 500 pages in the project, even though the service has been configured to use tier S0.
Are there limitations on this level or is it because the service is in preview?
I think I found the right documentation for this point, but I agree, it was not easy to find.
It's here: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#input-requirements
It states that, for a custom model:
PDF and TIFF documents must be 200 pages or less, and the total size
of the training data set must be 500 pages or less.
So I guess yes, this is a limitation, but I don't know whether it is due to the preview or not.
I have built an Azure Custom Vision model using ~ 5000 of my own domain-specific images and a set of ~ 30 hierarchical and non-hierarchical labels.
I am not sure how best to organize my label zoo in this particular multi-label classification problem. The best approach (see e.g. https://www.researchgate.net/publication/225379571_A_Tutorial_on_Multi-label_Classification_Techniques and https://towardsdatascience.com/journey-to-the-center-of-multi-label-classification-384c40229bff) must depend on the inner workings of Custom Vision, alas undocumented*. Consider for example
Image Document_Description
1 Barclays Bank Statement
2 HSBC Bank Statement
3 Joe Bloggs' Curriculum Vitae
Given the (perhaps) unknown modelling scheme(s) used by Custom Vision, and its support for arbitrary tags, which labelling taxonomy will be most efficient (in terms of training compute and model performance)?
1. Hierarchical (choose one from each level):
Level 1: IsCV | IsBankStatement | IsOther | ...
Level 2 (under IsBankStatement): Barclays | HSBC | ...
2. Non-hierarchical:
IsCV, IsBankStatementBarclays, IsBankStatementHSBC, IsOther, ...
3. Both
4. Some other scheme perhaps informed by insider information?
Bonus: How would you use the available performance indicators - or the V3.0 API - to measure the performance of two competing taxonomies (with minimal training compute/cost)?
*I apologise for the desperate question. Before voting to close it, please allow Azure Cognitive Services time to comment, since this seems to be about the only forum in which they might be able to give input, and they do ask for queries via SO. Thanks.
I think Custom Vision only supports non-hierarchical tags for now, but you can submit suggestions here: https://cognitive.uservoice.com/forums/598141-custom-vision-service
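Given that only flat tags are supported, one workaround is to collapse the hierarchy yourself before uploading, turning each root-to-leaf path into a composite tag (option 2 in the question). A minimal sketch of that flattening step - the taxonomy shape and label names here are illustrative, not anything from the Custom Vision API:

```typescript
// A node in the hierarchical taxonomy; leaf nodes have no children.
interface Node {
  label: string;
  children?: Node[];
}

// Join each root-to-leaf path into a single composite tag string.
function flattenTaxonomy(nodes: Node[], prefix = ""): string[] {
  const tags: string[] = [];
  for (const node of nodes) {
    const tag = prefix + node.label;
    if (node.children && node.children.length > 0) {
      tags.push(...flattenTaxonomy(node.children, tag));
    } else {
      tags.push(tag);
    }
  }
  return tags;
}

const taxonomy: Node[] = [
  { label: "IsCV" },
  { label: "IsBankStatement", children: [{ label: "Barclays" }, { label: "HSBC" }] },
  { label: "IsOther" },
];

console.log(flattenTaxonomy(taxonomy));
// ["IsCV", "IsBankStatementBarclays", "IsBankStatementHSBC", "IsOther"]
```

The flat tags can then be created and attached to images through whichever Custom Vision SDK you use; the hierarchy can still be recovered later by splitting composite tags on the known prefixes.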
I am designing a SCORM 1.2-based e-learning solution for a client who manages their existing training courses via the SABA LMS. I am considering providing each section of the course as a separate SCO with its own score tracking.
I am wondering whether it is usually a function of the LMS to aggregate tracking scores across distinct SCOs for a user, or whether I should be creating a multi-SCO package which aggregates scores for each of its child SCOs.
I'd say there's a mix of some LMSs that will do some aggregation with scores and some that don't.
It's common practice for content vendors to deliver a complete block of content as a single package and often as a single SCO so that they can control the look and feel of the navigation. This means they often just present an aggregated score to the LMS.
It is possible to do more complex things with SCORM 2004 including multi-sco packages that include their own navigation menus, but this is not commonly done.
Here's some statistics on what features of SCORM are commonly used.
I would say, "Yes". The vendor who publishes the content needs to configure each section of the course as a separate SCO so that they can be tracked separately in the LMS.
A SCORM 1.2 course can be delivered as multiple SCOs by using the parameters attribute of the item element in the manifest file. This can also be achieved by using a different start page for each SCO rather than one start page that is common for the entire course.
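To make the parameters-attribute approach concrete, here is a minimal imsmanifest.xml fragment - the identifiers, titles, file name, and the query-string parameter are made up for illustration, only the attribute names come from SCORM 1.2:

```xml
<!-- Two items in one organization, both launching the same resource
     with a different query string so each behaves as its own SCO. -->
<organization identifier="ORG-1">
  <title>Course</title>
  <item identifier="ITEM-1" identifierref="RES-1" parameters="?sco=section1">
    <title>Section 1</title>
  </item>
  <item identifier="ITEM-2" identifierref="RES-1" parameters="?sco=section2">
    <title>Section 2</title>
  </item>
</organization>
<resources>
  <resource identifier="RES-1" type="webcontent" adlcp:scormtype="sco" href="index.html"/>
</resources>
```

Each item then gets its own run-time data model instance in the LMS, so scores are tracked per section.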
I'm new to SCORM itself and I have a problem with tracking progress via Moodle's LMS API
SCORM version is 1.2
I have structure like this:
Lesson1
Module1.1
Module1.2
...
Lesson2
Module 2.1
etc
Each lesson has a set of modules of 2 types:
HTML Modules - modules that are just viewed by users
Game Modules - games that award a medal (none, bronze, silver, gold) as a measure of module completeness
The progress tracking problem is the following:
I need to track progress on different Lessons based on the progress of their child Modules (sequencing?).
After all: I need to add a STAR to a lesson after all Game modules of the lesson are finished. The star indicates some sort of progress at the lesson level.
What I'm trying to do is to store each Module's progress data (medals) in the cmi.suspend_data variable as a string:
"module1.1,gold|module1.2,silver ..."
After that I want to process that string each time a page is loaded and figure out if I've gained a STAR for one of the lessons. For example: when I've finished the last game in lesson 1 with a medal, so that all games in it now have medals, and after that I move to lesson 2 - I should add a star to lesson 1...
The problem is that moving from module to module and from lesson to module etc. RESETS the suspend_data variable.
Question 1: Does suspend data link to a SCO object? (which would mean each module/lesson has its own suspend_data var)
Question 2: What is the CORRECT approach in this situation to track sequencing progress? (As I've seen, SCORM 2004 has some sequencing mechanisms that can be described in the manifest. What is the correct approach in version 1.2?)
Question 1: the cmi.suspend_data is unique to each SCO, and can only be read/set from within the SCO. In your case, SCO2 cannot read SCO1's suspend_data and vice-versa.
Question 2: you'd better stick with a single-SCO approach here. All your modules and lessons will be part of a single SCO, which means you will be able to track the medals and user progress without any problem.
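With the single-SCO approach, the medal bookkeeping becomes plain string handling around the SCORM 1.2 API calls. A sketch of that logic, using the medal format from the question - the stub object below stands in for the LMS-provided window.API (the usual window/parent search for it is omitted), and the function names are my own:

```typescript
// The subset of the SCORM 1.2 runtime API this sketch needs.
interface Scorm12API {
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(param: string): string;
}

// In-memory stand-in for the real LMS API, so the logic runs anywhere.
function makeStubApi(): Scorm12API {
  const store: Record<string, string> = {};
  return {
    LMSGetValue: (el) => store[el] ?? "",
    LMSSetValue: (el, v) => { store[el] = v; return "true"; },
    LMSCommit: () => "true",
  };
}

// Parse "module1.1,gold|module1.2,silver" into a module -> medal map.
function parseMedals(raw: string): Map<string, string> {
  const medals = new Map<string, string>();
  for (const entry of raw.split("|")) {
    const [id, medal] = entry.split(",");
    if (id && medal) medals.set(id, medal);
  }
  return medals;
}

function serializeMedals(medals: Map<string, string>): string {
  return [...medals].map(([id, medal]) => `${id},${medal}`).join("|");
}

// Read, update, and persist the medal state inside the single SCO.
function recordMedal(api: Scorm12API, moduleId: string, medal: string): void {
  const medals = parseMedals(api.LMSGetValue("cmi.suspend_data"));
  medals.set(moduleId, medal);
  api.LMSSetValue("cmi.suspend_data", serializeMedals(medals));
  api.LMSCommit("");
}

// A lesson earns its star once every one of its game modules has a medal.
function lessonHasStar(api: Scorm12API, gameModules: string[]): boolean {
  const medals = parseMedals(api.LMSGetValue("cmi.suspend_data"));
  return gameModules.every((id) => medals.has(id));
}
```

Keep in mind SCORM 1.2 only guarantees 4096 characters for cmi.suspend_data, so keep the module identifiers short.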
Many banners are tied to a zone. All of these banners have different targeting requirements using the site:variable (I say "requirements" loosely, as a banner can be displayed even when its requirements are not matched). The reason for this is that all banners must ultimately receive a roughly even number of impressions; however, along the way, the system should use the best targeting available when possible.
An example of the desired logic is below:
Given -
Banner 1 Targeting: IncomeGreaterThan20k=1, FishingIndustry=1
Banner 2 Targeting: IncomeLessThan20k=1, FishingIndustry=1
Visitor Profile: IncomeGreaterThan20k=1, FishingIndustry=1
Case 1 -
Banner 1 Impressions = 999
Banner 2 Impressions = 1000
Zone Rendered to Visitor 1 - Banner 1 is displayed
Why?: Targeting of Banner 1 is better than targeting of other ads (more matches on site:variables), best targeted banner has impressions less than or equal to other banners = true, show Banner 1.
Case 2 -
Banner 1 Impressions = 1000
Banner 2 Impressions = 1000
Zone Rendered to Visitor 1 - Banner 1 is displayed
Why?: Targeting of Banner 1 is better than targeting of other ads (more matches on site:variables), best targeted banner has impressions less than or equal to other banners = true, show Banner 1.
Case 3 -
Banner 1 Impressions = 1001
Banner 2 Impressions = 1000
Zone Rendered to Visitor 1 - Banner 2 is displayed
Why?: Targeting of Banner 1 is better than targeting of other ads (more matches on site:variables), best targeted banner has impressions less than or equal to other banners = false, show Banner 2.
When there are more than 2 banners, the logic should be extended based on the number of targeted variables matched and the number of impressions.
How can you configure the banner targeting to accomplish this?
If this can be accomplished, is there a way to put importance weights on the various site:variables?
If this can be accomplished, can you adjust the threshold for the impressions difference allowed between ads? Rule: no ad should be rendered more than 10 times more than any other ad.
The number of targeting fields matching does not affect ad selection.
If 4 banners in a zone end up with their targeting as 'true' (as in, all targeting criteria are met) then they are all considered for delivery.
After that, if all 4 are remnant banners from different campaigns, the only thing which adjusts the ad selection is the campaign weight. If they're all equal weighting, they all have equal chance of selection. If campaign1 has double the weight of campaign 2,3, and 4, then it has double the chance of the other campaigns of being selected.
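The weight-proportional selection described above is easy to sketch; the banner IDs and weights here are illustrative, not OpenX internals:

```typescript
interface Banner {
  id: string;
  weight: number;
}

// Pick a banner with probability proportional to its weight.
// `rand` is injectable so the behaviour can be tested deterministically.
function pickBanner(banners: Banner[], rand: () => number = Math.random): Banner {
  const total = banners.reduce((sum, b) => sum + b.weight, 0);
  let r = rand() * total;
  for (const b of banners) {
    r -= b.weight;
    if (r < 0) return b;
  }
  return banners[banners.length - 1]; // guard against rounding error
}

const banners: Banner[] = [
  { id: "campaign1", weight: 20 }, // double weight: double the chance
  { id: "campaign2", weight: 10 },
  { id: "campaign3", weight: 10 },
  { id: "campaign4", weight: 10 },
];
```

Here campaign1 is selected 40% of the time (20 out of a total weight of 50), the others 20% each, matching the "double the weight, double the chance" behaviour.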
To do exactly what you wish would require a plugin which alters the ad selection process.
1) Set all campaign weights equal (let's say weight=10), and all campaigns as remnant
2) Once all banners with targeting=false are thrown away, analyze the remaining banners and give more weight to ones with more targeting criteria
3) During hourly maintenance, analyze the stats and give a higher weight to ones which are falling behind. You don't want to do this during delivery because querying stats during delivery will cause a lot of overhead to the delivery process, which should be as quick as possible without DB calls
Using weights does not guarantee equal impressions - if two banners each have a 50/50 chance of delivering, there is a chance bannerA will deliver 1005 impressions and bannerB 995, etc. It generally works out well - but since you are altering weights depending on targeting, you are going against the 'deliver evenly' idea. Perhaps pausing an ad which has gone beyond the 10x threshold is a better idea, and then re-activating it once it is back within 5x (or so).
Note - unfortunately, making plugins for OpenX isn't very easy unless you have someone who already knows their way around. It's not a matter of knowing PHP; it's a matter of knowing the OpenX plugin architecture.