
UI Design Newsletter – January, 2004

In This Issue

Creating Effective Online Surveys: Owning Photoshop doesn't make you an artist

Kath Straub, Ph.D., CUA, Chief Scientist of HFI, looks at what is required to make an online survey an effective data gathering tool.

Taking a usability survey

The Pragmatic Ergonomist

Eric Schaffer, Ph.D., CUA, CPE, Founder and CEO of HFI, offers practical advice.

Creating Effective Online Surveys: Owning Photoshop® doesn't make you an artist

Have you ever – just for fun – played around with a photograph in Adobe Photoshop? It's an incredibly powerful tool. You can change the lighting...the colors...the composition...even the content. (Remember the royal wedding pictures where Prince William wasn't smiling? So they doctored the photo...?) And we haven't even started talking about the dozens of filters that can turn your photo into a watercolor from outer space with a fuchsia-to-lime mezzotint and a Gaussian blur for extra effect...

In the hands of a graphic artist, Photoshop is an amazing tool. In the hands of a curious novice, it's a fun toy. We (I consider myself part of this group) can do some very basic things effectively. But for most novices, Photoshop becomes an unproductive (albeit entertaining) distraction because it is TOO powerful. The myriad of possibilities is distracting. Absorbed in the tools, we lose sight of the fact that we lack the essential understanding of the subtleties that allow real graphic artists to do their thing.

Electronic survey tools present a similar problem. In recent years, a number of powerful survey tools have emerged on the Web. Some of these tools make it easy to post very sophisticated surveys and collect data on the Web. Distribution is cheap. Data entry, verification, and descriptive analysis can be completely automated. Just-in-time help (e.g., links to definitions of terms) and multimedia resources can be embedded directly into the survey. Since contingent question presentation can be handled invisibly, there is no more "If you answered 'Yes,' skip to question 26b." Further, using electronic mail to invite participants moves recruiting from a pull activity to a push activity.
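To make that "invisible" branching concrete, here is a minimal sketch in Python of how a survey engine might decide which question to show next. The question IDs, the show_if convention, and the next_question helper are hypothetical illustrations, not the API of any particular survey tool.

    # Hypothetical survey definition: a question may carry a "show_if"
    # predicate that inspects the answers collected so far.
    QUESTIONS = [
        {"id": "q26a", "text": "Did you complete a purchase today?"},
        {"id": "q26b", "text": "How satisfied were you with checkout?",
         "show_if": lambda answers: answers.get("q26a") == "Yes"},
        {"id": "q27", "text": "Any other comments?"},
    ]

    def next_question(answers):
        """Return the first unanswered question whose condition is satisfied."""
        for question in QUESTIONS:
            if question["id"] in answers:
                continue  # already answered
            condition = question.get("show_if")
            if condition is None or condition(answers):
                return question
        return None  # survey complete

    # The respondent never sees the branching instruction:
    print(next_question({"q26a": "No"})["id"])   # q26b is skipped; prints "q27"
    print(next_question({"q26a": "Yes"})["id"])  # prints "q26b"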

These are all very powerful tools. You can literally implement a survey in a few minutes. It is easy to forget, in this domain of apparently instantaneous productivity, that there is an entire field of knowledge that supports the design of effective and meaningful survey data collection. As with Photoshop, the power of the tool distracts us from the essence of the task.

Research on effective electronic surveys

A recent survey paper by Andrews, Nonnecke, and Preece (2003) highlights the challenges of developing effective surveys. Based on a comprehensive review of the literature, they discuss several areas of concern for researchers creating electronic surveys, including sampling and participant selection, and survey design.

Click here to provide feedback...

It is not unusual today for Web content managers to simply add a link to their home page that says: "Give us feedback. Participate in a survey." Who clicks on these links? Why?

Effective sampling – getting an appropriately large and representative set of participants to respond – presents one of the greatest challenges for survey implementation in any mode. If the sample isn't representative, the survey results are not very meaningful.

Achieving an appropriate representative random sample with a reasonable return rate is never trivial. Unfortunately, the nature of the Internet makes accessing a representative sample virtually impossible. Research suggests that individuals who participate in Internet surveys use the net more frequently and have better Internet skills than those who opt out (Kehoe & Pitkow, 1996). They are just different. While the male-female gap has largely disappeared (NUA, 2001), electronic survey participants tend to have higher incomes and education levels and tend to be more homogeneously Caucasian than the general population of Internet users. The demographic gaps (economics, age, ethnicity) between on-line and off-line populations are even larger (Yun & Trumbo, 2000).

Since there is no common registry of Internet users, there is no way to identify all possible users (sometimes called "the sampling frame"). As such, participant selection is necessarily nonrandom from the outset. Surveyors must remember that generalizing the results to the off-line (or even the whole on-line) population may not be viable.

Available tools notwithstanding, knowing who you want to survey, and how you will reasonably access and get responses from that group, is critical to developing a meaningful survey. Just adding a link typically won't get you there.

Couper (2000) suggests implementing strategies such as intercept sampling to achieve a pseudo-random sample. In intercept sampling, every nth visitor is invited to participate in the survey – like an exit poll. Anecdotal evidence suggests that inviting individuals via a pop-up with a single "Would you...?" question, followed up with an e-mail message containing a link to the actual survey, yields increased response rates. It's a start.
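As a rough illustration of the intercept idea – a sketch only, with a hypothetical handle_visit hook and an assumed sampling rate; a real implementation would also set a cookie or similar marker so the same visitor is not invited twice – inviting every nth visitor amounts to little more than a counter:

    import itertools

    N = 50  # invite every 50th visitor; an assumed rate, tuned to the sample size you need
    _visitors = itertools.count(1)

    def should_invite():
        """True for every Nth visitor – the electronic equivalent of an exit poll."""
        return next(_visitors) % N == 0

    def handle_visit(visitor_email):
        # Hypothetical hook called once per site visit.
        if should_invite():
            # Step 1: lightweight pop-up with a single "Would you...?" question.
            # Step 2: follow-up e-mail containing a link to the actual survey.
            print(f"Invite {visitor_email} to take the survey")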

Surveys that feel like fishing expeditions usually are

Getting people to finish the survey is also a challenge. Have you ever been overwhelmed by an entire page of 1-7 ratings in a survey and just bailed? Surveys that appear long and mentally intensive typically don't get finished. Andrews, Nonnecke, and Preece (2003) note that it is important to make the survey appear easy and usable. They review several research-driven recommendations for increasing the likelihood that respondents will complete the survey:

  • Make sure the survey topic is of interest to the invitees
  • Customize the survey to the target population (e.g., language, expectations of multimedia use, etc.)
  • Assure the privacy and confidentiality of individuals' responses
  • Warn participants how long the survey will take (and be honest about it)
  • Use an appropriate subject line in e-mail invitations
  • Clearly provide contact information and access to the surveyors
  • Make automated passwords easy to enter and use
  • Pilot your survey on different platforms so you know that your participants are seeing what you intend them to see
  • Separate the invitation from the survey itself
  • Send reminder e-mails to non-responders

Piloting the survey

In addition to their literature review, Andrews, Nonnecke, and Preece (2003) review a case study of the development of a survey. Their goal was to determine whether effective and thoughtful design would allow researchers to elicit survey responses from 'lurkers', or non-active participants, in electronic communities. (Yes, it does.) More interesting, though, are the (re)discoveries these trained researchers experienced as they piloted their survey through several stages of development.

Stage 1: The Colleague Test
Candidate questions were drafted and reviewed over four iterations within the survey team.

Research team observations:

  • Writing effective questions is not easy.
  • Balancing brevity, tone, and accuracy requires feedback.
  • Redundant questions, and those not directly related to the topic, should be deleted.

Stage 2: The Cognitive Test
The entire survey system was usability tested. Participants read the invitation, completed the survey and participated in retrospective interviews using a think-aloud procedure.

Research team observations:

  • Writing effective instructions, questions, and privacy reassurances requires feedback from outside the survey team.
  • Physical interactions with the survey (e.g., scrolling a long set of questions) have a significant impact on survey usability.
  • Effects of sequence and grouping are group dependent. Survey developers' intuitions are not necessarily representative of survey takers' experiences or conceptual hierarchies.
  • Typographic elements like category headings help participants map the sections and content of the survey.
  • Participants tend to move directly to the survey without reading the text. Instructions must be scannable.

Stage 3: Live Test
Pilot the survey on a small scale to validate the sampling procedure and investigate the response rate. Test-drive analysis procedures.

Research team observations:

  • You don't know exactly what data you wish you had collected until you collect some. Reviewing and making minor changes to the data-collection process at this stage can make a large difference in the effectiveness of the data collected overall.

Stage 4: Clean Up
Final grammar and format checks.

Owning Photoshop doesn't make you an artist

Creating surveys that meaningfully collect data from the right set of participants requires a keen and thoughtful effort built on an understanding of the subtleties of survey design. The strategies above provide some key insights into the issues and concerns of implementation; they do not begin to touch on what it means to write effective survey questions.

Remember that photograph that you stretched and colored and filtered in Photoshop? What did you do with it? As with Photoshop, novices can pull off some trick maneuvers with electronic survey tools. But to develop serious, longitudinal data-collection tools, you need to invest time in understanding the domain, not just in acquiring tools.

References

Andrews, D., Nonnecke, B., & Preece, J. (2003). Electronic Survey Methodology: A Case Study in Reaching Hard-to-Involve Internet Users. International Journal of Human-Computer Interaction, 16(2), 185-210.

Couper, M. P. (2000). Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly, 64(4), 464-494.

Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley and Sons.

Kehoe, C. M., & Pitkow, J. E. (1996). Surveying the Territory: GVU's Five WWW User Surveys. The World Wide Web Journal, 1(3), 77-84.

Yun, G. W., & Trumbo, C. W. (2000). Comparative Response to a Survey Executed by Post, E-mail, & Web Form. Journal of Computer-Mediated Communication, 6(1).

The Pragmatic Ergonomist, Dr. Eric Schaffer

I have come to love remote usability methods. They allow us to sit in Mumbai and reach out to 12 different countries in one week. They let us test cross-cultural issues that were simply too expensive to test in the past. We now routinely get useful results from surveys and from much more involved usability testing. The day of remote usability is clearly dawning.

While remote methods have become practical, I continue to be astounded by the amateurish activities of colleagues who should darn well know better. We recently had a study run by one of the largest usability firms in Europe. They used our facility in India. We were astounded that they had NO idea about sample selection. They simply tested whoever was handy! I just had a conversation with usability staff from one of the world's largest software companies. They were very happy to select test subjects from attendees at their users conference. How awful! They are testing the converted and concerned. They may feel they are getting input. But they are certainly being deluded.

The famous police gun instructor Bill Jordan often said, "Speed is fine, but accuracy is final." We have a similar situation. It is fine to have good survey and testing tools. But the focus and skill of the tester will make the difference between solid and applicable results and a warm feeling accompanied by delusions and failure.




