The eye tracking lab has always been the Disneyland of real usability wonks. It's so cool. For years we said there MUST be something in the data it collects that can take usability to the next level. And for years the pragmatists (including HFI's own) have pushed back: It's too complex to use. It takes too long to calibrate. The data is overwhelming to assimilate and interpret. Valid eye tracking tests require a different philosophical approach to the usability testing interaction. Eye trackers are not cheap.
Some of these criticisms still ring true. Eye trackers are still not cheap. New software algorithms make first-pass analysis of the data easy, but interpreting and leveraging that data effectively entails an understanding of perception, reading, and (yawn) statistical analysis. The way usability tests are choreographed also needs to change if you are going to collect valid eye tracking data. (For instance, setting up a scenario where the participant tends to look at the tester each time there is a probe undermines the validity of the scan path.)
However, the days of drooling on a bite bar through 30 minutes of calibration are over. And the physical evolution of eye trackers is opening unique opportunities to refine and hone your information architecture.
People tilt their heads when you say that eye tracking should be used to refine the information architecture. The typical response is, "You mean graphics, right? What catches the attention? How long do people linger? What do they LOOK at?"
Right. That's all true. Eye tracking data is effective – particularly when used in conjunction with click stream analytics – for assessing the attentional draw of marketing elements on a page.
But this same type of data is also very useful in evaluating and understanding the effectiveness of the information architecture. Consider the eye tracking heat map below. This heat map reflects the visual search of a single user seeking San Diego traffic information on the City of San Diego home page.
The red spots (or "hotspots") show where the user looked longest. In this picture, "longest" is a combination of either lingering on an area (as in the Business section in the main text) or looking repeatedly before making a decision (as in the navigation tabs). (Additional first- and second-"pass" looking data can be used to tease these two behaviors apart.)
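For the curious, here is a minimal sketch of how such a dwell-time heat map can be computed. Everything here is an assumption for illustration (the fixation format, screen size, and grid cell size), not any vendor's actual pipeline:

```python
import numpy as np

# Hypothetical fixation records: (x, y) in pixels, duration in ms.
# Real trackers export richer data; this flat format is an assumption.
fixations = [
    (412, 118, 240), (430, 121, 310),  # repeated looks at the nav tabs
    (205, 340, 620),                   # a long linger in the body text
    (415, 120, 180),
]

def heat_map(fixations, width=1024, height=768, cell=32):
    """Accumulate fixation durations into a coarse grid.

    A hotspot is simply a cell with a large total dwell time, whether
    that time comes from one long fixation or many short revisits.
    """
    grid = np.zeros((height // cell + 1, width // cell + 1))
    for x, y, dur in fixations:
        grid[y // cell, x // cell] += dur
    return grid

grid = heat_map(fixations)
row, col = np.unravel_index(grid.argmax(), grid.shape)
print(f"Hottest cell: row {row}, col {col}, dwell {grid[row, col]:.0f} ms")
```

Note that the raw grid deliberately conflates lingering and revisiting; separating the two requires keeping per-fixation order, as in the first- and second-pass analysis mentioned above.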
So in their simplest form, eye tracking heat maps like the one shown above can be used to evaluate where users linger, where they look repeatedly before committing to a choice, and which page elements draw attention at all.
Bojko (2006) presents a study that demonstrates the value of including eye tracking in early prototype testing. Her team used eye tracking to compare the findability of key, frequently accessed content on a proposed homepage redesign of a medical professional society's site against the existing site. The redesign objective was to highlight key functionality and improve the findability of critical information. The team used conventional usability methods (interviews, card sorting, etc.) to inform the redesign.
Had she evaluated only conventional usability measures (accuracy and time-on-task), the two designs would have appeared to perform roughly equally. However, in-depth analysis of the eye tracking data showed some interesting differences between the two designs at the task level.
For instance, while one core task was completed in just a few seconds on either design, behavioral analysis showed that the proposed redesign was much more efficient: Fixations (spots where the eye lands) were numerous and scattered on the old site, but they tended to cluster around a single, more clearly presented navigation element on the prototype site. This is not surprising, since the new design effectively reduced the number of competing and distracting elements on the homepage. Not surprising, sure. But hindsight is 20/20. Eye tracking provided clear validation for the explanation: users' eyes wandered less on the new design. Usability practitioners need empirical validation to move the field forward, and traditional usability testing data simply can't provide this level of interpretive insight.
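To make "numerous and scattered" concrete: a simple, standard pair of measures is the fixation count and the dispersion of fixations around their centroid. A sketch, reusing the invented fixation format from the earlier example:

```python
import numpy as np

def scatter_metrics(fixations):
    """Fixation count plus dispersion: the mean distance (in pixels)
    of each fixation from the centroid of all fixations for the task.
    A scattered search yields many fixations far from the centroid;
    a focused search yields few fixations clustered tightly."""
    pts = np.array([(x, y) for x, y, _ in fixations], dtype=float)
    centroid = pts.mean(axis=0)
    dispersion = np.linalg.norm(pts - centroid, axis=1).mean()
    return len(pts), dispersion

# Invented data for the two designs, just to show the shape of the result:
old_site = [(100, 90, 200), (700, 400, 150), (300, 600, 180), (650, 80, 210)]
new_site = [(410, 120, 250), (430, 115, 220)]

for name, fx in (("old", old_site), ("new", new_site)):
    n, d = scatter_metrics(fx)
    print(f"{name} design: {n} fixations, mean dispersion {d:.0f} px")
```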
A further analysis of the eye tracking data showed that the revised navigation labels also improved the site's usability. Users were more confident about the meaning of labels: they looked for shorter times, they looked back and forth less, and they selected and clicked links more quickly.
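Those label-level measures are straightforward to compute once you draw areas of interest (AOIs) around each label. A sketch, with AOI bounds and fixations invented for illustration:

```python
# Rectangular AOIs around two navigation labels: (x1, y1, x2, y2).
AOIS = {
    "Business":  (50, 200, 180, 230),
    "Residents": (200, 200, 330, 230),
}

def aoi_stats(fixations, aois):
    """Per-label total dwell time and revisit count. Shorter dwell and
    fewer revisits suggest users are more confident about a label."""
    stats = {name: {"dwell_ms": 0, "visits": 0} for name in aois}
    last = None
    for x, y, dur in fixations:
        for name, (x1, y1, x2, y2) in aois.items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                stats[name]["dwell_ms"] += dur
                if last != name:       # a fresh entry into this AOI
                    stats[name]["visits"] += 1
                last = name
                break
        else:
            last = None                # fixation fell outside all AOIs
    return stats

fix = [(100, 210, 300), (250, 215, 280), (110, 212, 220)]
print(aoi_stats(fix, AOIS))
```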
Bojko uses these examples to suggest that eye tracking offers both quantitative evidence to validate redesign choices and qualitative process insights to refine designs further. Some of the quantitative data comes from conventional usability measures such as success rates and time on task; other data, such as visual linger times and scan paths, can only come from eye tracking. The qualitative data provide insight by showing where users look, exposing efficiencies and inefficiencies in the task flow.
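Scan paths themselves can also be compared quantitatively. One generic approach (not necessarily the one Bojko's team used) encodes each fixation as the letter of the AOI it landed in and then measures string similarity between the resulting sequences:

```python
from difflib import SequenceMatcher

# Invented encodings: N = navigation, B = body text, C = click target.
path_old = "NNBNBNC"   # meandering search on the old design
path_new = "NC"        # direct path on the redesign

# ratio() gives 1.0 for identical sequences, 0.0 for no overlap.
similarity = SequenceMatcher(None, path_old, path_new).ratio()
print(f"Scan path similarity: {similarity:.2f}")
```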
Recent improvements in eye tracking technology suggest that it may be time to start taking eye tracking seriously as a standard usability method.
Bojko, A. (2006). Using eye tracking to compare web page designs: A case study. Journal of Usability Studies, 1(3), 112-120.
I agree completely with Dr. Schaffer. I think its major value is for marketing, rather than usability assessment. Given the kind of cost-benefit considerations that govern the business world, I don't see how I could justify the time, effort, and expense.
Excellent article. The visual eye movement map example is very useful. I'm passing the article to our webmasters as a reference for future home page design. Thank you for the useful information.
Thank you Eric for just plain common sense.
It would be great if you could post some information on how to analyze data from eye tracking studies. For instance, how do we know if a spot is hot because the information in that spot is perceived as important, or because the user requires a longer time to understand the presented information? An article along those lines would be nice.