There are a lot of people out there who want automated accessibility testing. I see the question asked all the time on Twitter, especially by those who may be new to accessibility. Often they may not know exactly what to focus on, and just want a report, or linting errors, to tell them what to fix and why.

Fortunately for them, there are quite a few tools out there that can be run as part of development build processes or as one-off instances via browser extensions, and there are even services that will run continuous testing against your website or application.

These testing tools are really great for helping people find flagrant accessibility issues. However, just because a warning pops up, or your “accessibility score” doesn’t reach a perfect 100%, it doesn’t automatically mean something is wrong; you should verify that these warnings are actually issues to solve for.

Consider my test page (opens in a new window).

The markup of the test page
<body>
  <div id="app">
    <header>
      <img src="logo.svg" role="img" alt="My Company!">
      <nav>
        <a href="/about">About</a>
        <a href="/stuff">Stuff</a>
        <a href="/contact">Contact</a>
      </nav>
    </header>
    <main>
      <h1>Primary topic!</h1>
      <p>Some text goes here...</p>
      <p>Oh, figures I'd need a figure:</p>
      <figure role="figure" aria-label="repeat figcaption content here">
        <img src="#" alt="I'm broken!">
        <figcaption>
          <p>Caption for the figure.</p>
        </figcaption>
      </figure>
      <p>Another figure that's made out of a div:</p>
      <div role="figure" aria-label="repeat figcaption content here">
        <img src="#" alt="I'm broken!">
        <p>
          Caption for the figure.
        </p>
      </div>
    </main>
    <footer role="contentinfo">
      &copy; 2019 or something.
    </footer>
  </div>
</body>

It should be noted that there are zero actual accessibility issues with this reduced test case. Even the doubling up of ARIA roles on their native elements is necessary to rectify some present gaps (see How do you figure?, Landmark accessibility, and this <img src="file.svg"> bug).

With that said, I ran my test page against four different automated testing tools. I received the following results (the vocabulary for how each tool calls out violations, best practices, warnings, and calls for manual review has been simplified to just “issues”):

  • 0 issues.
  • 3 issues.
  • 9 issues.
  • 10 to 54 issues.

Many of the “issues” that these tools reported are nothing more than best-practice checks, or call-outs for manual review. Each “issue” was often accompanied by good advice. However, for those who may be unfamiliar with the ins and outs of what these best practices mean, it can cause a bit of sticker shock to see large issue and review counts when everything is actually coded properly.
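That distinction between definite failures and items merely queued for human review is something you can often recover from the tool’s raw output. As a sketch, here is how you might triage a report shaped like axe-core’s JSON results, where `violations` are definite failures and `incomplete` items need a human to verify (the sample report below is hypothetical, but the two top-level keys are the shape axe-core uses):

```python
import json

# A minimal, made-up report in the general shape axe-core produces.
report_json = """
{
  "violations": [
    {"id": "aria-roles", "impact": "critical",
     "description": "ARIA roles used must conform to valid values"}
  ],
  "incomplete": [
    {"id": "color-contrast", "impact": "serious",
     "description": "Elements must have sufficient color contrast"}
  ]
}
"""

def triage(report: dict) -> dict:
    """Split results into definite failures and manual-review items."""
    return {
        "fix_now": [v["id"] for v in report.get("violations", [])],
        "review": [i["id"] for i in report.get("incomplete", [])],
    }

buckets = triage(json.loads(report_json))
print(buckets)
```

Separating the two buckets before reporting a single scary number goes a long way toward avoiding that sticker shock.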

There were some tools that reported actual “errors” for my test page. These were largely due to the use of role="figure", which was erroneously reported as an “invalid ARIA role”; it is, in fact, a valid role.

Moral of the story

Automated tests are just one tool we can use to help ensure we’re building accessible websites and applications. However, it’s important to understand the results you’re receiving, and to be able to verify whether an issue, or best practice, is truly something to act on.

For instance, warnings to check text against background images are commonplace with many automated checkers. It can be incredibly difficult to automatically determine color contrast in such situations, so a warning is flagged for a manual check instead. It is a good thing to be reminded to verify such instances. While it may seem daunting to be met with a large number of issues to “review”, it’s far better than receiving no guidance at all.
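The reason contrast checks against solid colors can be automated, while image backgrounds can’t, is that the WCAG contrast-ratio formula takes exactly two colors as input. A minimal sketch of that formula, using the relative-luminance and contrast-ratio definitions from WCAG 2.x:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB color (0-255 channels)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two solid colors, per WCAG 2.x."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White on black is the maximum possible ratio, 21:1.
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```

With a background image there is no single `bg` color to plug in; every pixel behind the text is a candidate, which is why these tools defer to a human reviewer.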

Sometimes you may come across gaps in the assertions these tools are based on. Or a best practice may dictate that you don’t double up ARIA roles on their native elements, when that’s absolutely what you need to do to ensure cross-browser and assistive technology feature parity.

So please do use these tools; they can be incredibly helpful. It just behooves us to ensure that we’re not chasing down “100% accessibility scores” when it’s usable and inclusive user experiences we should really be focused on.