

Introduction

I received many emails and had many discussions after last month's article where I suggested that heuristic evaluations have limited usefulness in the design of systems.

Here is my main problem with most of the heuristic evaluations I have seen over the past several years. Some influential people in the user experience community have convinced many others in the community, and most people outside it (managers, system designers, programmers, etc.), that heuristic evaluations (and other low-level evaluation techniques) are good enough on their own to achieve acceptable levels of human performance. I do not believe that this is true!

Jakob Nielsen and a (very) few others studied heuristic evaluations for a few years in the early 1990s, and he has since summarized many of those findings in his books. He always assumed that heuristic evaluations were a valid way to identify usability problems; he never (not once) tried to find out how good they really were when compared against the usability problems users actually encounter. Nevertheless, through his large number of publications, he has convinced just about everybody that heuristic evaluations are a cheap way to build effective systems "at a discount"!

A good evaluation of human-computer interactions is very difficult to do, requires considerable expertise, and in many cases the final payoff can be very low. As I mentioned last month, heuristic evaluators can miss some serious problems while causing designers to go to the expense of making many changes that make no difference to performance or preference at all. In fact, some of those new design changes will most likely introduce new usability problems.

I believe that the research is clear on this, but unfortunately its message does not fit well with the pressure to design and develop websites quickly in a real-world environment. This poses a serious problem for usability specialists. Computer professionals believe that they are getting much more from the typical heuristic evaluation than they really are, and it is only after they see the fallacy of relying so heavily on heuristic evaluations that they will change the way they approach usability in systems. We cannot move forward, and we cannot create more usable systems, as long as most designers and managers believe that heuristic evaluations are providing more information than they really are.

I suspect that the limitations of heuristic evaluations are understood by few, and for this reason they will continue to be used as almost the only test of system usability. Unfortunately, this is like debugging a program by reading through the code rather than running it on a computer. Inspecting code is fast, and it sometimes works, but none of us would rely on it as the major means of finding coding errors. Yet that is exactly what we do with heuristic evaluations.

Message from the CEO: Dr. Eric Schaffer, The Pragmatic Ergonomist, weighs in on heuristic evaluations.

Heuristic evaluations have stirred up quite a furor of opinion. During my 25 years of user interface experience, I have seen dozens of attempts to solve usability with a simple "10-step" process. It cannot be done. There is no substitute for a systematic design process that includes contextual inquiry with representative users, task design, interface design by experts, and several cycles of usability testing. It is hard work, but it is the only way to succeed.

I agree that traditional heuristic evaluations have been sold as an alternative to good design practice. It is no surprise that such a quick and inexpensive process has so little value. But these low-level heuristic evaluations are different from the rigorous "Expert Reviews" done by the usability professionals at HFI (and by other top usability experts).

When we do an Expert Review at HFI, it can take as long as two weeks and can cost up to $40,000. We have a systematic process of walking through each website (similar to Bob's "algorithmic evaluations"). [Note: Bob will discuss algorithmic evaluations in a future newsletter.] We consider hundreds of pages of specific research-based design requirements, and our experts can identify some usability issues that cannot be found even with the best Performance Tests. So a heuristic evaluation may be of limited value, but a professional Expert Review can be a very cost-effective evaluation technique. When we begin work on a new system, I almost always recommend that an Expert Review be conducted first and all issues resolved, and only then do we conduct Performance Tests (why use the more expensive Performance Tests to find problems that an Expert Review can find?).


