A bit of history, and discussion of the upcoming live-checking capabilities.
2024 A11yTalks

An in-depth look at how inline checkers like Sa11y and Editoria11y work differently from developer-focused manual tools and crawler-based dashboards.
A developer’s tour of the JavaScript, Web Components, JSON API and Drupal features that make up the Editoria11y checker.
Two years ago we introduced the Editoria11y accessibility checker as a turnkey site plugin for Drupal that would offer authors automatic instant feedback, with helpful tips for improving their content.
We thought this would meet an unmet need. Existing options all required training and diligent use to be useful, and experience had shown us that trying to train and monitor thousands of content authors across a large organization was not an efficient way to tackle the problem. For a program to succeed, we were confident it needed a high level of automation, providing polite “just in time training” to content authors when it detected common mistakes.
It worked.
At Princeton, it largely reversed the direction of my communications. On platforms with it installed, we went from sending repetitive emails to content authors requesting alt text, link or heading improvements, to receiving questions from content authors about how to improve their writing to avoid it getting flagged so much.
Externally, the reaction was elation:
We took a “tease and link” approach with the v1 tips: a short explanation of the issue, and a link for more details.
This was an improvement on tools written for developers: the tips were short and clear. But the decision of where to link was always an issue. Several government and higher education users felt the need to modify the tips to remove links to Princeton’s documentation, which created extra work for them. So we rewrote the default tips to be self-contained, with inline examples.
The distinction between red alerts (definite issues) and yellow warnings (possible issues) proved confusing, and at times annoying.
So we converted yellow warnings to “manual checks” to clarify their meaning, and added buttons to let authors dismiss alerts. “Ignored” alerts are hidden only for the person who dismissed them; “marked OK” alerts are hidden for all site editors. Site administrators can choose who has permission to do each.
A common lament by site owners was that the tool was doing a great job of finding issues all over the site, but there was no way to view a site-wide report.
The rewrite adds an option to sync findings back to a site’s database. Site owners can now browse findings by page or issue, and review which issues have been ignored or marked OK – and restore an issue if they disagree!
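As a rough illustration of how such a sync might work, here is a minimal sketch of a finding record and the grouping a dashboard would do. All field names here are hypothetical stand-ins, not Editoria11y’s actual schema:

```javascript
// Hypothetical sketch: the shape of one finding synced to the site's
// database. Field and state names are illustrative, not the real schema.
const finding = {
  page: '/news/welcome-week', // URL of the page where the issue appeared
  test: 'altMissing',         // which check fired
  state: 'open',              // 'open' | 'ignored' | 'ok'
  dismissedBy: null,          // user ID when ignored or marked OK
};

// A site-wide report can then group findings by page or by test type:
function groupBy(findings, key) {
  return findings.reduce((groups, f) => {
    (groups[f[key]] ||= []).push(f);
    return groups;
  }, {});
}

const byTest = groupBy([finding], 'test');
```

Restoring a dismissed issue would then amount to flipping a record’s `state` back to `open`, which is what makes the “review and restore” workflow possible.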
V2 was nearly a full rewrite. Some of the new features include:
As in the year after v1’s release, it is now time to sit back and let authors test and provide feedback. Please do explore the checker library demo or the Drupal integration demo, and send in any bug reports and feature requests.
Creating highly accessible Web content is complicated, and tends to start with a lot of training.
Familiar practices must be discouraged:
New practices must be encouraged:
Certainly some of our trainings are rolled out because we are teaching new concepts, but at some point the question needs to be asked: how much of the need for new skills is coming from the concepts being new, and how much is coming from us rolling out new expectations without updating our tools? Just because something is new does not mean it cannot be intuitive.
From the dawn of recorded history until the early 1980s, we blamed the author when they misspelled a word. The responsibility was on the author to learn to spell well.
How did someone improve their spelling? This is going to sound familiar:
But something changed in the 1980s: let’s call it “spelling phase 2.” The most popular word-processing programs implemented spell check.
And what happened to the blame game? Almost overnight, blame shifted to the tool:
Did spell check fix all of our spelling? Certainly not. But it helped. And it shifted much of the burden and responsibility from the author to the machine.
This was not the end, though…
Recent years have brought another step forward: computers have started spelling for us:
And who gets blamed when this falls apart? The tool!
Social media aggregators love the embarrassing “That’s not what I meant!” screenshot.
In our trainings, we talk a lot about testing, and introduce several quite good testing tools. None of these can catch everything: concepts like “avoiding explaining concepts through chromatic or spatial references” need to be checked for by a human; tools typically catch just under half of the actual issues a site might have. But the tools do a very good job at catching many common mistakes in editorial content.
Three problems, though:
So Princeton set out to make something new: a website-integrated, automatic checker that would only look at common editorial mistakes.
Automatic is…tough.
So we started with the closest tool we could find to these goals: Sa11y, out of Ryerson University. We took its test architecture and user-friendly information-panel-with-tooltips approach, and spent half a year adapting the tool so that tests always ran automatically, optimizing its performance, tweaking the tooltips and creating a long list of configuration options a developer could use to quickly adapt it to any platform.
Here is how it looks today:
When an author is logged into their site, Editoria11y’s toggle indicates the number and type of issues (pass, manual checks needed, likely issues found) on each page. They can click to reveal inline highlighting and tips. If the error is new, because they are looking at a page they just edited, the panel pops open automatically.
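To give a feel for this behavior in code, here is a minimal sketch of the toggle’s decision logic and the kind of configuration a platform integration might pass in. The option names below are hypothetical placeholders, not Editoria11y’s documented API:

```javascript
// Illustrative configuration a platform integration might supply.
// These option names are hypothetical, not the library's real parameters.
const checkerOptions = {
  checkRoot: 'main',          // scan authored content, not the site theme
  ignoreElements: '.ad, nav', // skip regions authors cannot edit
  autoOpen: true,             // pop the panel open when new issues appear
};

// The toggle's badge summarizes scan results: likely issues take priority
// over manual checks, and a clean scan shows a pass state.
function toggleState(results) {
  if (results.errors > 0) {
    return { icon: 'error', count: results.errors };
  }
  if (results.warnings > 0) {
    return { icon: 'manual-check', count: results.warnings };
  }
  return { icon: 'pass', count: 0 };
}
```

Separating “likely issue” from “manual check” in the badge itself is what lets authors see at a glance whether a page needs a fix or just a second look.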
As of March 2021, this means more than 400 Princeton websites are automatically checking for:
We have released the JavaScript library to the community, created a turnkey Drupal integration, and started work on a turnkey WordPress integration.
In many ways, Editoria11y is a stopgap solution. Most of the mistakes it catches were made because we placed new expectations on content authors (“tag your content with good structure”), but gave them tools that encouraged the opposite.
The path forward is to make accessible content creation easier, and as automatic as possible. What that looks like is not yet obvious, but probably includes things like:
| Phase | Spelling | Content Accessibility |
|---|---|---|
| 1 | Dictionaries and manual checks | Guidelines and manual tools |
| 2 | Automatic spell check | Editoria11y and future automatic accessibility checkers |
| 3 | Predictive spelling | Predictive structuring |