Human-Computer Interaction. 2011;26(3):246-283.
Web accessibility means that disabled people can effectively perceive, understand,
navigate, and interact with the web. Web accessibility evaluation methods are needed
to validate the accessibility of web pages. However, the role that subjectivity
and evaluator expertise play in such methods has not previously been studied. This
article investigates the effect of expertise in web accessibility evaluation methods
by conducting a Barrier Walkthrough (BW) study with 19 expert and 57 nonexpert judges.
The BW method is an evaluation method that can be used to manually assess the accessibility
of web pages for different user groups such as motor impaired, low vision, blind,
and mobile users. Our results show that expertise matters: although the effect
of expertise varies with the metric used to measure quality, the level of
expertise is an important factor in the quality of accessibility evaluations of web
pages. In brief, when pages are evaluated by nonexperts, we observe a drop in validity
and reliability. We also observe a negative monotonic relationship between number
of judges and reproducibility: more evaluators mean more diverse outputs. After five
experts, reproducibility stabilizes, but this is not the case with nonexperts. The
ability to detect all the problems increases with the number of judges: with 3 experts
all problems can be found, whereas 14 nonexperts are needed to reach the same level. Even though
our data show that experts rated pages differently, the difference is quite small.
Finally, compared to nonexperts, experts spent much less time with smaller
variability among them, were significantly more confident, and rated themselves
as more productive. The article discusses practical implications regarding how
BW results should be interpreted, how to recruit evaluators, and what happens when
more than one evaluator is hired.