While WCAG has been formulated without reference to specific assistive technologies, one of the most widely used assistive technologies is the screen reader.
Screen readers are applications that allow people – usually people who are blind or partially sighted, but also others, such as people with dyslexia – to use computers: operating systems, word processors, integrated development environments, music players and, of course, browsers. Screen readers:
- Communicate all content by voice and/or braille display.
- Enable users to navigate a site, and explore a webpage’s content and current states without needing to use a pointing device or view a screen.
- Alert users to changes in state and content of web pages.
- Enable users to read and interact with a web page’s links, forms, widgets and other focusable controls using only a keyboard.
Who uses them?
Screen readers are used by:
- Blind people, and partially sighted people without enough useful vision to see and operate a webpage.
- Partially sighted people who have some useful vision, but who find it inconvenient or exhausting to rely on sight alone.
- People with sensitive eyes who find looking at a screen for prolonged periods painful.
- Dyslexic people who may have good vision, but have difficulty reading text.
How do they work?
We’ll concentrate on screen readers used with browsers, although of course they are used with most other software in similar ways.
A screen reader acts as an intermediary layer between the browser and the user. Hooking into the browser’s accessibility API, it builds its own Accessible DOM from the browser’s DOM of the webpage. When someone operates a screen reader, they are interrogating and navigating this Accessible DOM, rather than the browser’s DOM directly. The screen reader also manages its own virtual cursor, independent of the one in the browser – be aware that the virtual cursor’s position may not match the browser’s cursor visible onscreen! To the screen reader user, of course, it appears that they are interacting with the webpage directly.
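As a purely illustrative sketch – not any real browser API – you can think of each Accessible DOM node as keeping just the properties assistive technology needs, such as a role and an accessible name, derived from the underlying DOM node (the tag names and role mapping below are simplified):

```javascript
// Toy stand-in for a DOM subtree (hypothetical content).
const dom = {
  tag: 'main',
  children: [
    { tag: 'h1', text: 'Checkout' },
    { tag: 'img', alt: 'Company logo' },
    { tag: 'button', text: 'Pay now' },
  ],
};

// Map a DOM-like node to a simplified accessible node.
function toAccessibleNode(node) {
  // Greatly simplified tag-to-role mapping, for illustration only.
  const roles = { main: 'main', h1: 'heading', img: 'image', button: 'button' };
  return {
    role: roles[node.tag] || 'generic',
    // Accessible name: alt text for images, visible text otherwise.
    name: node.alt ?? node.text ?? '',
    children: (node.children || []).map(toAccessibleNode),
  };
}

const tree = toAccessibleNode(dom);
console.log(tree.children.map(n => `${n.role}: ${n.name}`));
// → [ 'heading: Checkout', 'image: Company logo', 'button: Pay now' ]
```

The real mapping (roles, names, states, relationships) is far richer, but this is the essential idea: the screen reader navigates a semantic tree, not pixels.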
Navigation and interaction are entirely keyboard-based, using Tab and Shift+Tab, the arrow keys, and other special command keys. There is no reliance whatsoever on pointing devices, and it is not necessary to see the screen, or even to have one plugged in.
When users interact with the Accessible DOM – for example, clicking links or using widgets such as drop-down menus – the screen reader forwards those commands to the browser. It also lets users fill in forms, often via some form of ‘forms mode’ which, for instance, treats keystrokes as input for text boxes rather than as commands to move around the page.
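To make the mode switch concrete, here is a toy sketch of browse mode versus forms mode. The key bindings and messages are made up for illustration and don’t belong to any particular screen reader:

```javascript
// Toy model: the same keystroke means different things in each mode.
let mode = 'browse';
let textBoxValue = '';

function handleKey(key) {
  if (mode === 'browse') {
    // In browse mode, single letters are navigation commands.
    if (key === 'h') return 'jump to next heading';
    if (key === 'Enter') { mode = 'forms'; return 'entering forms mode'; }
    return `browse command: ${key}`;
  }
  // In forms mode, keystrokes become input for the focused control.
  if (key === 'Escape') { mode = 'browse'; return 'leaving forms mode'; }
  textBoxValue += key;
  return `typed '${key}' (text box now '${textBoxValue}')`;
}

console.log(handleKey('h'));     // → jump to next heading
console.log(handleKey('Enter')); // → entering forms mode
console.log(handleKey('h'));     // → typed 'h' (text box now 'h')
```

The same ‘h’ key navigates in one mode and types a letter in the other, which is why forms mode exists at all.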
If the content of a webpage (i.e., the browser’s DOM) updates or changes, whether caused by the user or not, the browser’s accessibility API fires change events. The screen reader detects these and updates its own Accessible DOM accordingly.
Sighted users will probably be able to see the changes, but screen reader users will not know they have happened unless they happen to navigate to them later on. Sometimes this is fine, but if it is important for screen reader users to be aware of a change (e.g., an error message), the webpage can be marked up with ARIA (Accessible Rich Internet Applications) live regions. When content changes inside these regions, screen readers alert users to the change as it happens, with varying degrees of specified urgency.
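For example, a page might mark up a status area as a live region. The `aria-live` values and `role="alert"` below are standard ARIA; the message text is invented:

```html
<!-- 'polite': announced when convenient, without interrupting the user. -->
<div aria-live="polite" id="status">Item added to basket.</div>

<!-- role="alert" implies aria-live="assertive": announced immediately. -->
<div role="alert">Error: your card was declined.</div>
```

Whenever script updates the text inside either element, the screen reader announces the new content at the corresponding urgency.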
It’s important to understand that users, mediated by the screen reader, access the DOM of the webpage directly, and pay little attention to the visual representation of the webpage that the browser builds using CSS.
The next post will describe how screen reader users interact with web pages…