Abstract
Usability studies are commonly used in industry and applied in research as a yardstick for other usability evaluation methods. Although usability studies have been investigated extensively, one potential threat to their reliability has been left virtually untouched: the evaluator effect. In this study, four evaluators individually analyzed four videotaped usability test sessions. Only 20% of the 93 detected problems were detected by all evaluators, and 46% were detected by only a single evaluator. From the total set of 93 problems, the evaluators individually selected the ten problems they considered most severe. None of the selected severe problems appeared on all four evaluators' top-10 lists, and 4 of the 11 problems considered severe by more than one evaluator were detected by only one or two evaluators. Thus, both the detection of usability problems and the selection of the most severe problems are subject to considerable individual variability.
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Proceedings of the HFES 42nd Annual Meeting |
| Number of pages | 5 |
| Place of publication | Santa Monica, CA |
| Publisher | Human Factors and Ergonomics Society |
| Publication date | 1998 |
| Pages | 1336-1340 |
| Publication status | Published - 1998 |