Cognitive walkthrough
The cognitive walkthrough method is a usability inspection method used to identify usability issues in a piece of software or website, focusing on how easy it is for new users to accomplish tasks with the system. Whereas the cognitive walkthrough is task-specific, heuristic evaluation takes a holistic view to catch problems not caught by this and other usability inspection methods.
The method is rooted in the notion that users typically prefer to learn a system by using it to accomplish tasks, rather than, for example, studying a manual. It is prized for its ability to generate results quickly at low cost, especially when compared to usability testing, and for the fact that it can be applied early in the design phases, before coding has even begun.
Introduction
A cognitive walkthrough starts with a task analysis that specifies the sequence of steps or actions required by a user to accomplish a task, and the system responses to those actions. The designers and developers of the software then walk through the steps as a group, asking themselves a set of questions at each step. Data is gathered during the walkthrough, and afterwards a report of potential issues is compiled. Finally, the software is redesigned to address the issues identified.
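To make the idea of a task analysis concrete, the short Python sketch below is purely illustrative: the task, the step wording, and the Step structure are invented for this example and are not part of any standard walkthrough tool. It records one task as an ordered list of steps, each pairing the user's action with the system response the designers expect.

# Illustrative only: a minimal way to record a task analysis as an ordered
# list of steps, each pairing the user action with the expected system
# response. All names and step texts here are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    action: str            # what the user must do at this step
    system_response: str   # how the system is expected to respond

# Example task analysis: forwarding an email message.
forward_email = [
    Step("Select the message in the inbox list",
         "The message is highlighted and its contents are shown"),
    Step("Click the 'Forward' button in the toolbar",
         "A compose window opens with the original message quoted"),
    Step("Type the recipient's address and click 'Send'",
         "The window closes and a 'message sent' notice appears"),
]

Any similar tabular or spreadsheet representation works just as well; the essential point is that every step names both the required user action and the system's response to it.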
The effectiveness of methods such as cognitive walkthroughs is hard to measure in applied settings, as there is very limited opportunity for controlled experiments while developing software. Typically, measurements involve comparing the number of usability problems found by applying different methods. However, Gray and Salzman called the validity of those studies into question in their 1998 paper "Damaged Merchandise", demonstrating how difficult it is to measure the effectiveness of usability inspection methods. Nevertheless, the consensus in the usability community is that the cognitive walkthrough method works well in a variety of settings and applications.
Walking through the tasks
After the task analysis has been made, the participants perform the walkthrough by asking themselves a set of questions for each subtask. Typically four questions are asked:
- Will the user try to achieve the effect that the subtask has? Does the user understand that this subtask is needed to reach the user's goal?
- Will the user notice that the correct action is available? E.g. is the button visible?
- Will the user understand that the desired subtask can be achieved by the action? E.g. the right button is visible, but the user does not understand its label and therefore does not click on it.
- Does the user get feedback? Will the user know that they have done the right thing after performing the action?
By answering these questions for each subtask, the evaluators identify usability problems.
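As a hypothetical illustration of how the answers might be recorded and turned into the report of potential issues mentioned in the introduction, the following self-contained Python sketch asks the four questions for each step of a small invented task and collects every negative answer. The question wording follows the list above; the task, the answers, and all other names are assumptions made up for the example.

# Illustrative sketch: for each step of the task analysis, the group records
# a yes/no answer (plus a note) to each of the four questions, and every
# "no" becomes an entry in the report of potential usability problems.
# The task, answers, and structure are hypothetical, not a standard format.
QUESTIONS = [
    "Will the user try to achieve the effect that the subtask has?",
    "Will the user notice that the correct action is available?",
    "Will the user understand that the subtask can be achieved by the action?",
    "Does the user get feedback?",
]

# Each step is reduced to its user action; answers[i][j] is (yes_or_no, note)
# for question j at step i.
steps = [
    "Select the message in the inbox list",
    "Click the 'Forward' button in the toolbar",
    "Type the recipient's address and click 'Send'",
]
answers = [
    [(True, "")] * 4,
    [(True, ""), (False, "the toolbar icon is unlabeled"), (True, ""), (True, "")],
    [(True, "")] * 4,
]

report = []
for i, action in enumerate(steps):
    for question, (ok, note) in zip(QUESTIONS, answers[i]):
        if not ok:
            report.append(f"Step {i + 1} ({action}): {question} [{note}]")

for problem in report:
    print(problem)

In practice the same bookkeeping is usually done on paper or in a spreadsheet; the value of the method lies in forcing an explicit answer to each question at every step.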
Common mistakes
In teaching people to use the walkthrough method, Lewis & Rieman have found two common misunderstandings:
- The evaluator doesn't know how to perform the task themselves, so they stumble through the interface trying to discover the correct sequence of actions, and then they evaluate the stumbling process. (Instead, the evaluator should identify and perform the optimal action sequence.)
- The walkthrough does not test real users on the system. The walkthrough will often identify many more problems than you would find with a single, unique user in a single test session.
History
The method was developed in the early nineties by Wharton, et al., and reached a large usability audience when it was published as a chapter in Jakob Nielsen's seminal book on usability, "Usability Inspection Methods". The Wharton, et al. method required asking four questions at each step, along with extensive documentation of the analysis. In 2000 there was a resurgence of interest in the method in response to a CHI paper by Spencer, who described modifications to make it effective in a real software development setting. Spencer's streamlined method required asking only two questions at each step and involved creating less documentation. Spencer's paper followed the example set by Rowley, et al., who described the modifications they had made to the method, based on their experience applying it, in their 1992 CHI paper "The Cognitive Jogthrough".
Further reading
- Blackmon, M. H., Polson, P. G., Kitajima, M., & Lewis, C. (2002). Cognitive Walkthrough for the Web. CHI 2002, vol. 4, no. 1, pp. 463–470.
- Blackmon, M. H., Polson, P. G., & Kitajima, M. (2003). Repairing Usability Problems Identified by the Cognitive Walkthrough for the Web. CHI 2003, pp. 497–504.
- Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-Computer Interaction (3rd ed.). Harlow, England: Pearson Education Limited. p. 321.
- Gabrielli, S., Mirabella, V., Kimani, S., & Catarci, T. (2005). Supporting Cognitive Walkthrough with Video Data: A Mobile Learning Evaluation Study. MobileHCI '05, pp. 77–82.
- Goillau, P., Woodward, V., Kelly, C., & Banks, G. (1998). Evaluation of virtual prototypes for air traffic control - the MACAW technique. In M. Hanson (Ed.), Contemporary Ergonomics 1998.
- Good, N. S., & Krekelberg, A. (2003). Usability and Privacy: A Study of KaZaA P2P File-Sharing. CHI 2003, vol. 5, no. 1, pp. 137–144.
- Gray, W. D., & Salzman, M. C. (1998). Damaged Merchandise? A Review of Experiments that Compare Usability Evaluation Methods. Human-Computer Interaction, vol. 13, no. 3, pp. 203–261.
- Gray, W. D., & Salzman, M. C. (1998). Repairing Damaged Merchandise: A Rejoinder. Human-Computer Interaction, vol. 13, no. 3, pp. 325–335.
- Hornbaek, K., & Frokjaer, E. (2005). Comparing Usability Problems and Redesign Proposals as Input to Practical Systems Development. CHI 2005, pp. 391–400.
- Jeffries, R., Miller, J. R., Wharton, C., & Uyeda, K. M. (1991). User Interface Evaluation in the Real World: A Comparison of Four Techniques. Conference on Human Factors in Computing Systems, pp. 119–124.
- Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990). Testing a Walkthrough Methodology for Theory-Based Design of Walk-Up-and-Use Interfaces. CHI '90 Proceedings, pp. 235–242.
- Mahatody, T., Sagar, M., & Kolski, C. (2010). State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. International Journal of Human-Computer Interaction, vol. 26, no. 8, pp. 741–785.
- Rowley, D. E., & Rhoades, D. G. (1992). The Cognitive Jogthrough: A Fast-Paced User Interface Evaluation Procedure. Proceedings of CHI '92, pp. 389–395.
- Sears, A. (1998). The Effect of Task Description Detail on Evaluator Performance with Cognitive Walkthroughs. CHI 1998, pp. 259–260.
- Spencer, R. (2000). The Streamlined Cognitive Walkthrough Method, Working Around Social Constraints Encountered in a Software Development Company. CHI 2000, vol. 2, issue 1, pp. 353–359.
- Wharton, C., Bradford, J., Jeffries, R., & Franzke, M. (1992). Applying Cognitive Walkthroughs to More Complex User Interfaces: Experiences, Issues and Recommendations. CHI '92, pp. 381–388.
See also
- Cognitive dimensions, a framework for identifying and evaluating elements that affect the usability of an interface.
- Comparison of usability evaluation methods