
UI Design Newsletter – March, 2006

In This Issue

When getting the job done isn't enough... How insight into users' process makes interactions more satisfying

Kath Straub, Ph.D., CUA, Chief Scientist of HFI, looks at how the types of data you collect in a usability test can affect the impact of your redesign recommendations.

The Pragmatic Ergonomist

Dr. Eric Schaffer, Ph.D., CUA, CPE, Founder and CEO of HFI, offers practical advice.

When getting the job done isn't enough...

English has more words for user data than Eskimo has for snow

How many words do you have for user data?

Interface designers today are swirling within a blizzard of data. How many types of user data does your Web team collect?

Analytics such as drop-off rates, click-stream logs, Google search analysis, and FAQ access provide insights into how users move through an interface (or don't). Webmaster email offers insight into the things users are seeking, in the words they can't find on the site. User surveys can provide a broad array of data constrained only by the survey design and (sometimes) the mood of the survey participant. Think-aloud-protocol usability testing provides a running play-by-play of users' expectations, successes, and frustrations within a specific task space.

Each of these data streams seems to provide a slightly different perspective on the user experience. Which data stream, or combination of data streams, is most effective for driving interface improvement and increasing user satisfaction?

Do you want to know if... ...or how?

Recent work by Kelkar and colleagues suggests that the choice depends on the level of improvements you wish to make to your interface.

Their study looks at data collected from a usability test of a Web-based application (WebCT) and compares the impact on the resulting design recommendations of collecting performance data alone versus collecting both performance and process data.

For Kelkar and colleagues, performance data focuses on whether users can do the task. It includes measures such as:

  • Task completion rates
  • Error rates
  • Time on task
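
As a side note, here is a minimal sketch (in Python) of how these three performance measures might be tallied from per-participant session records. It is purely illustrative and not taken from the study; the record fields and values are assumptions for the example.

    # Illustrative sketch only: hypothetical session records for one task.
    sessions = [
        {"participant": "P1", "completed": True,  "errors": 2, "seconds": 145},
        {"participant": "P2", "completed": False, "errors": 5, "seconds": 300},
        {"participant": "P3", "completed": True,  "errors": 1, "seconds": 110},
    ]

    n = len(sessions)
    completion_rate = sum(s["completed"] for s in sessions) / n   # task completion rate
    mean_errors = sum(s["errors"] for s in sessions) / n          # errors per attempt
    mean_time_on_task = sum(s["seconds"] for s in sessions) / n   # seconds per attempt

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Mean errors per attempt: {mean_errors:.1f}")
    print(f"Mean time on task: {mean_time_on_task:.0f}s")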

Process data consists of observations about what users are doing during task completion. This can include both objective and subjective measures, such as:

  • Participant self-report
  • Behavioral observation
  • Eye movement tracking
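
In the same spirit, process observations can be coded and tallied alongside the performance numbers. The sketch below is again a hedged illustration rather than the study's method; the observation codes and area-of-interest names are hypothetical.

    # Illustrative sketch only: coded process observations from a session.
    from collections import Counter

    observations = [
        ("P1", "verbal",   "expected link in left nav"),
        ("P1", "fixation", "global_nav"),
        ("P2", "verbal",   "expected link in left nav"),
        ("P2", "fixation", "left_nav"),
        ("P3", "fixation", "left_nav"),
    ]

    # Where users looked (eye-movement fixations by area of interest)
    fixations_by_aoi = Counter(d for _, kind, d in observations if kind == "fixation")

    # What users said they expected (retrospective verbal reports)
    recurring_expectations = Counter(d for _, kind, d in observations if kind == "verbal")

    print(fixations_by_aoi.most_common())
    print(recurring_expectations.most_common())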

To measure the relative impact of the different types of user data, Kelkar and colleagues conducted a series of usability test/redesign stages.

In Stage I of the experiment, designers derived design improvement recommendations based strictly on performance measures of usability, including time-on-task, error rates, and overall task completion rates.

After the interface had been re-implemented to integrate the design recommendations, Kelkar and colleagues conducted Stage II.

In Stage II, designers derived improvements based on performance measures enhanced by process measures. The process measures included eye-movement-within-task data and retrospective review/verbal analysis of decisions and actions made during task completion.

The interface was re-implemented a second time to integrate the design recommendations resulting from the process + performance data.

A third round of usability testing measured the improvements to the user experience resulting from the Stage II recommendations.

That was easy!

Kelkar and colleagues found an interesting difference between the recommendations generated by just performance versus performance + process data.

Recommendations based on performance data resulted in an increase in task completion and a reduction of errors. This is not surprising, since the relatively descriptive performance data focused the design team on specific points of breakdown in the task flow, but provided little perspective on experience at the task level.

Recommendations based on performance + process data resulted in increased task efficiency and increased overall satisfaction with the interface.* Collecting either direct (think-aloud/retrospective verbal analysis) or indirect (eye-tracking) data provides greater insight into the user's expectations and anticipated task flow. Access to the user's mental model allows designers to identify and minimize gaps between the user's model and the site-interaction model.

This work suggests that the various data streams focus designers on different opportunities to improve the user experience. If the goal is to increase both task completion and perceived ease-of-use, then the collection of process data is critical. This data provides direct insight into the users' expectations and mental task model.


* It is important to note that process data would also likely yield recommendations that resulted in reduced errors and increased task completion, but in this experimental design, the opportunity to see/evaluate those recommendations may have been eclipsed by the Stage I / Redesign / Stage II model of the experiment.

References

Kelkar, K., Khasawneh, M., Bowling, S., Gramopadhye, A., Melloy, B., and Grimes, L. (2005). The Added Usefulness of Process Measures Over Performance Measures in Interface Design. International Journal of Human-Computer Interaction, 18(1), 1-18.

Pullum, G. K. (1991). The Great Eskimo Vocabulary Hoax and Other Irreverent Essays on the Study of Language. University of Chicago Press.

"Often the biggest value of usability work is a change in process, strategy, or psychological positioning." Indeed, beyond the realm of 'usability' this is the biggest design gap for all business activities/interactions. How might we focus more work/discussion in this area?

Paula Thornton


The Pragmatic Ergonomist, Dr. Eric Schaffer

It takes serious experience and care to create a usability assurance plan that really reflects business goals. I know usability teams that work just to find and reduce drop-off rates. This is fine, but it will never revolutionize the design or really substantially improve usability. But they are CORRECT in this approach. The revolution was 4 years ago and now fine tuning IS the appropriate activity.

But in general, this study really highlights how blind attention just to points of performance can limit the contribution of usability specialists. Often the biggest value of usability work is a change in process, strategy, or psychological positioning.



