Evaluation Methods

Evaluation methods assess a product's usability, which includes the dimensions of usefulness, learnability, efficiency, and user satisfaction.

Authors classify evaluation methods differently. This table presents four classification schemes, aligning approximately equivalent terms.

Rosson and Carroll    Lewis and Rieman          Nielsen and Mack     Preece, Rogers and Sharp
Analytical methods    Evaluating without Users  Formal methods       Predictive / Modeling user's task performance
                                                Informal methods     Predictive / Asking experts
Empirical methods     Evaluating with Users     Empirical methods    Usability testing
                                                                     Field studies
                                                Automatic methods

Preece, Rogers and Sharp (PRS) also describe a "Quick and Dirty" evaluation paradigm, which appears to refer generally to informal empirical methods.

Here I follow the organization set by Rosson and Carroll and add references to our text.

Analytical methods

These methods are usually conducted by HCI specialists and do not involve human participants performing the tasks; instead, they rely largely on the specialists' judgment. Not only do these methods identify potential usability problems, they also provide an understanding of those problems.

Common methods include heuristic evaluation, cognitive walkthrough, and the keystroke-level model.

Nielsen and Mack divide these methods into two categories: informal methods (e.g. heuristic evaluation and cognitive walkthrough) and formal methods (e.g. the keystroke-level model).
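
To make the formal category concrete, here is a minimal sketch in Python of a keystroke-level model (KLM) estimate. The operator times are the averages commonly cited for the KLM; the function name and the example operator sequence are illustrative, and a real analysis would calibrate the times for the user population and add system response time where relevant.

    # Commonly cited average operator times (seconds) for the KLM.
    OPERATOR_TIMES = {
        "K": 0.28,  # press a key or button (average typist)
        "P": 1.10,  # point at a target with the mouse
        "H": 0.40,  # move hands between keyboard and mouse
        "M": 1.35,  # mental preparation for the next step
    }

    def klm_estimate(operators: str) -> float:
        """Predicted execution time, in seconds, for a task written
        as a sequence of KLM operators, e.g. "MHPK"."""
        return sum(OPERATOR_TIMES[op] for op in operators)

    # Mentally prepare, reach for the mouse, point at a menu, click,
    # point at a menu item, click: M H P K P K
    print(f"{klm_estimate('MHPKPK'):.2f} s")  # prints 4.51 s

A prediction like this lets a specialist compare alternative designs for a frequent task without recruiting any users.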

Empirical methods

Empirical methods involve collecting data about human usage. There are direct methods (recording actual usage) and indirect methods (recording accounts of usage).

Direct methods (Observing users)

Indirect methods (Asking users)

In addition to their use during the Needs Analysis phase, these methods can be conducted at the end of a usability test to gather users' opinions of the product's usability, including its usefulness and their satisfaction with it.

Automatic methods

A link checker, which verifies that every hyperlink on a site still resolves, is one example. More such tools can be expected in the future.
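
As a sketch of what such a tool does, here is a minimal link checker in Python that uses only the standard library. The URLs and the function name are illustrative; a real checker would also crawl a site to discover its links, throttle its requests, and honor robots.txt.

    import urllib.error
    import urllib.request

    def check_links(urls):
        """Print the HTTP status of each URL, or the error that
        prevented a response."""
        for url in urls:
            request = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    print(f"{response.status}  {url}")
            except urllib.error.HTTPError as err:
                # The server answered with an error code, e.g. 404.
                print(f"{err.code}  {url}  (broken)")
            except urllib.error.URLError as err:
                # No response at all: bad hostname, refused connection, etc.
                print(f"ERR  {url}  ({err.reason})")

    check_links(["https://example.com/", "https://example.com/no-such-page"])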

