Wednesday, October 20, 2010

BIB_09: Donker-Kuijer, M. W., De Jong, M., & Lentz, L. (2008). Heuristic web site evaluation: exploring the effects of guidelines on experts' detection of usability problems. Technical Communication, 55(4), 392-404.

In this article, Donker-Kuijer et al. interrogate the assumed benefits of using heuristics in web evaluation. They conducted a study using observation and a think-aloud protocol to compare usability experts' unguided web evaluation practices with their heuristic-guided evaluations. The researchers set out to answer three research questions: (1) What does unguided expert evaluation tell us about the validity of the heuristics? (2) Are there differences between heuristic and unguided expert evaluation in the number and types of annotations made? and (3) Do high-level heuristics (a limited number of more or less general guidelines formulated as design aims rather than as specific design specifications) and low-level heuristics (a large set of detailed guidelines formulated as design specifications rather than as design aims) have different effects on the annotations experts make? The results led the authors to conclude that heuristics are useful but time-consuming to apply in practice, that different heuristics have different focuses and so the choice of an appropriate set matters, that there is no difference between the results obtained with high- and low-level heuristics, and that experts' experience is an important factor in their evaluations.

This piece is an empirical study of how effective and useful heuristics are for experts in web usability. However, I'm not sure an empirical study is really needed to understand how useful heuristics are to experts, considering the cost of such studies; interviews with or self-reflection by experts would probably do the job (and perhaps more effectively and accurately). The design of the study is also problematic. The experts were asked to evaluate a website without heuristics, i.e., relying on their own experience and knowledge, for 25 minutes, and then given a heuristic set with which they reevaluated the same website for another 25 minutes. To me, this is very poor research design: the same experts evaluated the website twice, once unguided and once with heuristics, and the researchers apparently did not consider the learning effect from the first pass, which may have influenced the results. It is also unclear why the heuristic set they chose is representative of heuristics in general. In any case, I think this article is a good example of researchers trying hard to produce empirical research not because of a sound rationale for its significance and rigor, but simply because empirical work is more valued in the field. The lesson learned: not all empirical studies are rigorous or appropriate for all research questions.
