The Expertise Effect on Web Accessibility Evaluation Methods

Brajnik, Giorgio; Yesilada, Yeliz; Harper, Simon

Human-Computer Interaction. 2011;26(3):246-283.

Access to files

Full-text and supplementary files are not available from Manchester eScholar. The full text is held externally and can be accessed via the publisher's website and related website links in the bibliographic metadata below.

Abstract

Web accessibility means that disabled people can effectively perceive, understand, navigate, and interact with the web. Web accessibility evaluation methods are needed to validate the accessibility of web pages. However, the role of subjectivity and of expertise in such methods is unknown and has not previously been studied. This article investigates the effect of expertise on web accessibility evaluation methods by conducting a Barrier Walkthrough (BW) study with 19 expert and 57 nonexpert judges. The BW method can be used to manually assess the accessibility of web pages for different user groups such as motor impaired, low vision, blind, and mobile users. Our results show that expertise matters: even though the effect of expertise varies depending on the metric used to measure quality, the level of expertise is an important factor in the quality of accessibility evaluations of web pages. In brief, when pages are evaluated by nonexperts, we observe a drop in validity and reliability. We also observe a negative monotonic relationship between the number of judges and reproducibility: more evaluators mean more diverse outputs. Reproducibility stabilizes after five experts, but this is not the case with nonexperts. The ability to detect all the problems increases with the number of judges: with 3 experts all problems can be found, whereas 14 nonexperts are needed to reach the same level. Even though our data show that experts rated pages differently, the difference is quite small. Finally, compared to nonexperts, experts spent much less time (with smaller variability among them), were significantly more confident, and rated themselves as being more productive. The article discusses practical implications regarding how BW results should be interpreted, how to recruit evaluators, and what happens when more than one evaluator is hired.
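
The abstract does not state the exact formula used to measure reproducibility, but the underlying idea of agreement between judges' outputs can be illustrated with a small sketch. The Python snippet below is a hypothetical example rather than the paper's own metric: it computes the mean pairwise Jaccard overlap between the barrier sets reported by different judges, so lower values correspond to more diverse outputs.

```python
from itertools import combinations

def pairwise_agreement(judgements):
    """Mean Jaccard overlap between every pair of judges' barrier sets.

    `judgements` maps a judge ID to the set of accessibility barriers
    that judge reported for a page. Higher values mean the judges'
    outputs agree more, i.e. the evaluation is more reproducible.
    """
    pairs = list(combinations(judgements.values(), 2))
    if not pairs:
        return 1.0
    scores = []
    for a, b in pairs:
        union = a | b
        scores.append(len(a & b) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Hypothetical barrier reports from three judges on one page.
reports = {
    "judge1": {"missing-alt-text", "low-contrast", "no-skip-link"},
    "judge2": {"missing-alt-text", "low-contrast"},
    "judge3": {"missing-alt-text", "keyboard-trap"},
}
print(f"mean pairwise agreement: {pairwise_agreement(reports):.2f}")
```

In this toy data, adding a judge whose report diverges from the others lowers the mean agreement, mirroring the abstract's observation that more evaluators mean more diverse outputs.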

Bibliographic metadata

Content type: Journal article
Published date: 2011
ISSN: 0737-0024
Publisher: Taylor & Francis
Publisher's website: http://www.tandfonline.com/doi/abs/10.1080/07370024.2011.601670
Volume: 26
Issue: 3
Start page: 246
End page: 283
Total pages: 38
Pagination: 246-283
Digital Object Identifier: 10.1080/07370024.2011.601670
Related website(s):
  • http://www.simonharper.info/publications/Harper2011kl.pdf
Access state: Active
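
Citation metadata for this record can be retrieved programmatically from the Digital Object Identifier above using the public doi.org content negotiation service. The sketch below (Python, standard library only) requests a BibTeX rendering of the record; the endpoint and Accept header are the standard doi.org service, not anything specific to eScholar.

```python
import urllib.request

DOI = "10.1080/07370024.2011.601670"

# doi.org supports content negotiation: asking for BibTeX returns a
# machine-readable citation instead of the publisher's landing page.
req = urllib.request.Request(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```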

Institutional metadata

University researcher(s): Harper, Simon

Record metadata

Manchester eScholar ID: uk-ac-man-scw:147745
Created by: Harper, Simon
Created: 11th January, 2012, 17:41:18
Last modified by: Harper, Simon
Last modified: 26th October, 2015, 16:30:31
