Intern Accessibility Testing

Intern - done reading: Web Content Accessibility Guidelines (WCAG), Section 508 Standards

// Intern - Accessibility Testing Using Intern-a11y:

Intern-a11y is an Intern plugin.  To install it along with intern:

npm install intern intern-a11y --save-dev

To use intern-a11y:

define([
    'intern!object',
    'intern/dojo/node!intern-a11y'
], function (registerSuite, a11y) {
    var tenon = a11y.services.tenon;

    registerSuite({
        name: 'accessibility tests',

        'check page': function () {
            return tenon.check({
                source: ''
            });
        }
    });
});
The above code will perform an accessibility check behind the scenes using Tenon, 
a cloud-based testing service.  Intern-a11y also supports aXe, an injectable 
client-side library.  Both of these scanners can check entire pages or portions 
of a page or app, and can be configured to use the rule sets necessary for the 
evaluation. 

If the check reports any failures, the promise returned by tenon.check will 
reject with an error and the test will fail. 

Tests using the axe module may be authored in a similar fashion, although the 
axe module works in a fundamentally different way. While the tenon module makes 
HTTP calls to an external service, the axe module uses Intern (specifically, 
the WebDriver call provided by Leadfoot) to load a test page and inject the aXe 
library into it. This means that axe can only be used in functional tests 
executed using intern-runner. 
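As a sketch, a functional test using the axe module might mirror the tenon example above. The option names here (remote, source) are assumptions rather than confirmed API, and a stub stands in for the real intern-a11y axe service so the shape can be run on its own; in a real suite, this.remote is the Leadfoot WebDriver session that Intern provides to functional tests.

```javascript
// Stub standing in for intern-a11y's axe service (assumed API shape),
// so this sketch runs without a WebDriver session.
var axe = {
    check: function (options) {
        return Promise.resolve('scanned ' + options.source);
    }
};

// The test returns the promise from axe.check, so a failed scan
// rejects the promise and fails the test.
var suite = {
    name: 'axe accessibility tests',

    'check page': function () {
        return axe.check({
            remote: this.remote,                       // Leadfoot session (functional tests only)
            source: 'http://localhost:9000/page.html'  // hypothetical page URL
        });
    }
};

// Exercise the test method with a fake remote to show the flow.
suite['check page'].call({ remote: {} }).then(function (result) {
    console.log(result); // "scanned http://localhost:9000/page.html"
});
```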

The intern-a11y package also includes a custom reporter, A11yReporter, that 
will save complete accessibility reports to a file or directory. When Intern 
runs a check using aXe or Tenon, a lot of data is typically returned. Intern 
only needs to see whether there are errors to pass or fail a check. However, it 
is useful to see more information about what failed. A11yReporter creates HTML 
reports in a standardized format that list the test failures returned by the 
scanner, including information about specific rule violations. 

To use the reporter, just add a descriptor to the reporters list in your Intern 
configuration: 

reporters: [
    {
        id: 'dojo/node!intern-a11y/src/A11yReporter',
        filename: 'a11y-report.html'
    }
]

After the test run is finished, results from all the accessibility tests will 
be aggregated into the filename specified in the reporter descriptor 
(a11y-report.html by default). It is also possible to provide a directory for 
filename, in which case the results from each failing test will be written to 
a separate file in the target directory.
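For example, a configuration sketch that writes per-test reports into a directory (the directory name here is arbitrary):

```javascript
// Intern configuration fragment: pointing filename at a directory makes
// A11yReporter write one report file per failing test.
reporters: [
    {
        id: 'dojo/node!intern-a11y/src/A11yReporter',
        filename: 'a11y-reports'   // a directory instead of a single .html file
    }
]
```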

If there were any failing tests, you’ll end up with a report. The report will 
contain a section for each failing test, and each test section will contain a 
list of cards describing specific rule failures.

Web accessibility testing generally works by having a scanner check a web page 
or web page fragment for rule violations. The most commonly used rules are 
defined in the World Wide Web Consortium (W3C) Web Content Accessibility 
Guidelines (WCAG) and the General Services Administration (GSA) Section 508 
Standards. Scanners can check for violations at different rule levels within a 
rule set and can typically be configured to check only a subset of rules. 
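With aXe, for instance, the subset is chosen through axe-core's documented run options. A minimal sketch of an options object limiting a scan to WCAG 2.0 level A and AA rules (this object is passed as the options argument to axe-core's axe.run in the browser):

```javascript
// axe-core run options: restrict the scan to rules tagged WCAG 2.0 A and AA.
var axeOptions = {
    runOnly: {
        type: 'tag',
        values: ['wcag2a', 'wcag2aa']
    }
};

console.log(axeOptions.runOnly.values.join(', ')); // wcag2a, wcag2aa
```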

Regardless of the scanner being used, automated accessibility checks are only a 
part of the solution. A human is still required to fully verify whether a site 
is accessible. For example, an automated tool can check that images have alt 
tags and that form elements have visible labels, but it can’t verify whether 
the information in those tags and elements is meaningful or accurate. Automated 
scanners can still save time by quickly detecting a large number of common 
design and layout issues, helping a manual reviewer focus on problem areas. 
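The alt-text case can be illustrated with a toy version of such a rule (a sketch for illustration, not any scanner's actual implementation):

```javascript
// Toy version of an automated alt-text rule: it can confirm that an alt
// attribute exists and is non-empty, but not that its text is meaningful.
function hasAltText(imgTag) {
    var match = /\balt\s*=\s*"([^"]*)"/.exec(imgTag);
    return match !== null && match[1].trim().length > 0;
}

console.log(hasAltText('<img src="chart.png" alt="image">')); // true, yet useless to a screen-reader user
console.log(hasAltText('<img src="chart.png">'));             // false: missing alt entirely
```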

Intern-a11y doesn’t reinvent the wheel by performing accessibility checks 
itself, as several great tools already exist. Rather, intern-a11y provides an 
interface to external accessibility scanners. Two such scanners are supported 
in the initial release: Tenon, a cloud-based testing service, and aXe, an 
injectable client-side library. Both of these scanners can check entire pages 
or portions of a page or app, and can be configured to use the rule sets 
necessary for the evaluation.