Measuring and comparing the reliability of the structured walkthrough evaluation method with novices and experts

Christopher Bailey, Elaine Pearson, Voula Gkatzidou

    Research output: Contribution to conference › Paper › peer-review

    1 Citation (Scopus)

    Abstract

    Effective evaluation of websites for accessibility remains problematic. Automated evaluation tools still require a significant manual element, and results are subject to strong expertise and evaluator effects. The Structured Walkthrough method adapts a manual, expert accessibility evaluation process for use by novices. The method is embedded in the Accessibility Evaluation Assistant (AEA), a web accessibility knowledge management tool. Previous trials examined the pedagogical potential of the tool when incorporated into an undergraduate computing curriculum; the evaluations carried out by novices yielded promising, consistent levels of validity and reliability. This paper presents the results of an empirical study comparing the reliability of accessibility evaluations produced by two groups, novices and experts. The main results indicate that the overall reliability of expert evaluations was 76%, compared with 65% for evaluations produced by novices. The potential of the Structured Walkthrough method as a useful and viable tool for expert evaluators is also examined.

    Original language: English
    DOIs
    Publication status: Published - 1 Jan 2014
    Event: 11th International Web for All Conference - Seoul, Korea, Republic of
    Duration: 7 Apr 2014 - 9 Apr 2014

    Conference

    Conference: 11th International Web for All Conference
    Abbreviated title: W4A 2014
    Country/Territory: Korea, Republic of
    City: Seoul
    Period: 7/04/14 - 9/04/14
