
UI Design Update Newsletter – May, 2003

Insights from Human Factors International


In This Issue:

Getting Your (Intended) Vote to Count – Kath Straub, Ph.D., CUA, Chief Scientist of HFI, and Susan Weinschenk, Ph.D., CUA, Chief of Technical Staff for HFI, look at the usability issues involved in the United States voting system.

The Ergonomic Pragmatist – Eric Schaffer, Ph.D., CUA, CPE, Founder and CEO of HFI, offers practical advice.

Getting Your (Intended) Vote to Count

As announcements of 2004 candidacies become more frequent, it may be time to revisit the usability challenges within the voting process. An estimated 4 to 6 million votes were "lost" in the controversial 2000 Presidential election (What Is, What Could Be – Fast Facts, CalTech/MIT Voting Technology Project, July 2001). Of those, an estimated 1.5 to 3 million votes were lost to registration mix-ups, 1.5 to 2 million to faulty polling equipment and confusing ballot design, and up to one million to polling station policy problems. An unknown number of absentee ballots were lost or mishandled.

     
Deconstructing the voting process  

All of these losses can be linked directly to usability problems:

  • Registration mix-ups

    Polling workers had difficulty matching voter IDs to voter status, or data collected or entered incorrectly at the registration stage caused confusion at the voter check-in stage.


  • Faulty polling equipment

    Poorly constructed or worded instructions led to improper machine setup and to problems with tallying or reporting votes.

  • Confusing ballot design

    Ballots are often designed inappropriately because individuals without design or usability training are tasked with this important job. County voting officials – albeit with good intentions – prioritized typographic aesthetics or efficient use of white space over basic cognitive processes like gestalt grouping tendencies. Remember the infamous Butterfly Ballot?

    Add to these unfortunate design decisions the social pressure to vote independently and quickly. (Some states have laws governing how quickly you must complete your vote.) As such, voters – particularly older voters – may feel self-conscious about asking for help.

  • Polling station policies

    Individuals attempting to exercise their right to vote were prevented from doing so because poorly constructed or ambiguously written standards of practice resulted in wide variation in identification requirements.

While much ink has been spilled arguing for usability testing of the specific voting apparatus, the whole voting system needs reassessment: the various task processes, the human-voting machine interaction, and both human-machine and human-human error recovery strategies all need to be evaluated and improved.

     
Leading edge technology: one step forward, two steps back…  

The most public problems in the 2000 voting process occurred with the interpretation of the punch card ballots in Florida. Many have recommended that the outdated balloting systems be replaced with more up-to-date Direct Recording Electronic devices (DREs). DREs are electronic voting machines that look like tablet PCs. To the technologically savvy they appear more usable because they support more direct manipulation, such as touch screens. The flexibility of on-screen presentation allows significant improvements in voting accessibility: typographic elements such as font size can be modified on a by-need, by-voter basis. DREs may also reduce the cognitive load of the voting process by segmenting election decisions into a series of discrete, sequential decisions (or screens) rather than the current practice of presenting all of the election propositions on a single, somewhat overwhelming ballot. Roth (1998) observed that DRE voting was more systematic, following the traditional Western reading order from left to right and top to bottom, compared to a random scanning pattern on the mechanical lever ballot.
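
To make the one-decision-per-screen idea concrete, here is a minimal, purely hypothetical sketch in Python – the structure and names are ours, not any vendor's actual DRE software – of presenting a ballot as a sequence of discrete choices:

    # A hypothetical sketch of one-decision-per-screen task flow.
    # Nothing here reflects real DRE software; it only illustrates
    # segmenting a ballot into discrete, sequential choices.
    RACES = [
        ("President", ["Candidate A", "Candidate B", "Candidate C"]),
        ("U.S. Senator", ["Candidate D", "Candidate E"]),
        ("Proposition 1", ["Yes", "No"]),
    ]

    def run_ballot(choose):
        """Present each race as its own 'screen'; collect one choice per race."""
        cast = {}
        for number, (office, options) in enumerate(RACES, start=1):
            # A progress cue ("Screen 2 of 3") was one of Roth's success factors.
            prompt = f"Screen {number} of {len(RACES)}: {office}"
            cast[office] = choose(prompt, options)
        return cast

    # Example: a stand-in voter who always picks the first option.
    print(run_ballot(lambda prompt, options: options[0]))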

The adoption of new voting technology appears to be driven by cost and perceived advancement, so DREs are rapidly replacing mechanical and optical voting apparatus: four times as many voters faced electronic voting machines in 2000 as in 1980. The American public is only now really embracing electronic commerce. Are we ready for electronic voting?

     
Promise versus reality and the need for usability testing  

The promise of DREs is great. However, the usability pressures on newer technology are also great: the detailed design of the actual voting screens will have a significant effect on how accurately these new devices capture voters' intentions. Proactive usability analysis and testing become even more important as counties and precincts move to adopt the more advanced technologies. To date, the few published studies that address ease and accuracy of voting are not encouraging.

In the wake of the 2000 election, Information Design Journal reprinted Roth's (1998) now-seminal voting interaction studies. She reports two observational studies with participant voters of different ages, comparing a touch screen with a lever-mechanism voting machine (Study 1) and various combinations of punch cards, paper ballots, and touch screens (Study 2). The studies did not collect voter intention; instead, voting success was equated with submitting a complete ballot (that is, voting on all possible candidates and issues). Several factors influenced voting success:

  • Height of the display
  • Ballot illumination
  • Task progress cues
  • Information organization and grouping
  • Type size
  • Use of "plain English"
  • Time pressure (increased pressure undermined the match between intention and action)

Reads a lot like a basic usability factors checklist, doesn't it?

     
Who did YOU (mean to) vote for?  

More recently, researchers at the University of Maryland evaluated the DRE that is to be adopted by a number of counties in Maryland (Bederson, 2003). In that report, Bederson and colleagues present the findings of both an expert review and a (limited) field study evaluating the usability of the voting apparatus. Their converging evaluations uncovered several usability problems that significantly undermine the likelihood of voters casting a complete and intended ballot. They identify issues such as general system failure, voter-card interaction failure, layout, affordances, text size, task flow, error recovery, visual design hierarchy, and monitor glare (a particular problem for aging voters – one of the nation's most active voting constituencies).

Voters' confidence in the interaction is also an issue: exit polls reflect that voters are not universally confident that the machine records the vote they intended or attempted to cast. In exit interviews in Bederson's study, 10% of participants stated that they did not feel confident that the ballot they had cast reflected their intended vote. In the state of Maryland, that would translate to 171,706 actual voters who were not confident about the accuracy of their cast ballot (Maryland State Board of Elections). Fully 8% of participants reported that they trusted the voting system only somewhat or not at all. Based on their expert review of the technology and the field studies, Bederson's research team suggested that polling places should:

"...provide the user with a printed record of the votes electronically recorded. Before leaving the polling place, the voter would be required to certify the contents of the paper record and place it into a ballot box..."

The University of Georgia's Carl Vinson Institute of Government quarterly Peach State Poll asked Georgia voters how confident they feel about the electronic voting systems now in place in Georgia. Press reporting of the survey focused on the increase in voters' confidence in the machines: in 2001, 56% of voters were Very Confident or Somewhat Confident that their vote was accurately counted; by December 2002, 93% of voters reported that confidence level. While that change may seem encouraging, the reverse of that statistic – that 7% of voters were Not Very Confident or Not at All Confident that their vote was accurately counted – should also give readers pause: roughly 88,400 active voters across the state of Georgia did not feel confident about the accuracy of their vote.
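
Both of the head counts above (171,706 in Maryland, 88,400 in Georgia) follow from the same back-of-the-envelope arithmetic. Note that the electorate totals below are back-derived from the article's own figures (count divided by rate) rather than quoted from the cited sources, so treat them as assumptions:

    # Back-of-the-envelope arithmetic behind the head counts above.
    # Both electorate totals are back-derived from the figures in the
    # text (count / rate), not quoted from the cited sources.
    md_turnout_2002 = 1_717_060    # assumed: 171,706 / 0.10
    ga_active_voters = 1_262_857   # assumed: 88,400 / 0.07
    print(round(md_turnout_2002 * 0.10))    # -> 171706 (Maryland)
    print(round(ga_active_voters * 0.07))   # -> 88400 (Georgia)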

Looking closer, voter confidence varies widely along racial lines. While 79% of Caucasian voters were Very Confident that their vote had been recorded properly by the new electronic voting system, only 40% of African-American voters felt that way. This disparity in voter confidence may reflect a variety of factors (e.g., confidence in the interface interaction, the impact of the digital divide). Critical to the human factors community, this disparity along racial lines highlights the importance of recruiting test participants who truly reflect and represent the end-user population in order to get accurate and meaningful test results.

     
In 1960, JFK beat Nixon by less than one vote per precinct…  

Studies assessing voting efficacy, defined as casting a complete ballot that reflects the intended choices, are critical to establishing the usability of the various voting tools. Is it truly efficacious to embrace the emerging technology? Work by the CalTech/MIT Voting Project suggests that migrating to electronic balloting may be somewhat premature.

Their work reports longitudinal trends in residual voting errors. Residual voting errors occur when a ballot fails to register a vote for any reason: failure to indicate a selection, indicating multiple selections, or (in the case of paper ballots) stray marks that mean the ballot cannot be processed. They report residual voting error statistics for a range of voting technologies, including paper ballots, lever machines, punch cards, bubble (optically scanned) ballots, and DREs. In their analysis, the residual voting rate of punch card methods and electronic devices is 50% higher than the rate for manually counted paper ballots, lever machines, and optically scanned ballots.
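
Restated concretely, a minimal sketch of that definition might look like this (the names and structure are ours, not the CalTech/MIT report's):

    # A minimal sketch of the residual-vote definition above; the
    # names and structure are ours, not the CalTech/MIT report's.
    from dataclasses import dataclass

    @dataclass
    class RaceMarks:
        selections: int   # number of candidates marked in this race
        unreadable: bool  # stray marks etc. that prevent processing

    def is_residual(race: RaceMarks) -> bool:
        undervote = race.selections == 0   # no selection indicated
        overvote = race.selections > 1     # multiple selections
        return undervote or overvote or race.unreadable

    print(is_residual(RaceMarks(selections=0, unreadable=False)))  # True
    print(is_residual(RaceMarks(selections=1, unreadable=False)))  # False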

Of particular interest is their additional analysis of the incidence of residual voting errors in counties that recently switched voting technologies. Here the researchers had a unique naturalistic observation opportunity: comparing residual error rates of the newly adopted technology with those of the replaced technology allowed them to determine whether updating increased the likelihood of successful voting.

Comparing the new technologies to the residual voting error baseline of lever voting machines (the most prevalent equipment), they derived the change scores shown below. In this table, positive numbers indicate that the newer technology results in a higher incidence of residual error than lever machines.

Projected Change in Residual Voting Error (percentage points, vs. lever machines)

    Machine Type                   Change
    Paper ballot                   –0.55
    Punch card: Votomatic          +1.32
    Punch card: DataVote           +1.24
    Bubble (optical scan) ballot   +0.11
    Electronic (DRE)               +0.90

Note that the prediction algorithm takes into account the fact that DREs protect voters from overvoting: unlike paper-and-pencil ballots, DREs do not register overvotes. Based on the findings of the CalTech/MIT Voting Project report, only a switch to traditional paper-and-pencil ballots reduced residual voting error. In contrast, counties that switch from levers to DREs can expect a significant increase in residual voting error: approximately 1% of all ballots cast will be unusable.
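
To see what those change scores mean in practice, consider a hypothetical county (the 100,000-ballot figure below is invented purely for scale):

    # Illustrative only: the county size is an invented number.
    # The change scores are percentage points relative to lever machines.
    CHANGE_VS_LEVERS = {
        "Paper ballot": -0.55,
        "Punch card: Votomatic": +1.32,
        "Punch card: DataVote": +1.24,
        "Bubble (optical scan) ballot": +0.11,
        "Electronic (DRE)": +0.90,
    }
    ballots_cast = 100_000  # hypothetical county
    for machine, change in CHANGE_VS_LEVERS.items():
        extra = ballots_cast * change / 100   # additional residual ballots
        print(f"{machine}: {extra:+.0f} residual ballots vs. levers")

In this hypothetical county, switching from levers to DREs would mean roughly 900 additional unusable ballots.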

Their projection that DRE adoption increases residual voting error is borne out by actual county return data: precincts adopting DRE technology reported an increase in residual voting error, while precincts that kept their lever technology or adopted bubble (optical) ballots reduced theirs.

The authors suggest several possible causes for this increase in uncountable ballots, including the technology learning curve, the voters' learning curve, inadequate administrative attention to the care and maintenance of voting apparatus, and the intimidation factor for less technologically sophisticated voters.

     
So what now?  

All in all, these findings suggest that even if America actually IS ready for an electronic polling place, the technology to support that migration clearly is not. Basic interface and task-flow usability problems that directly undermine voting efficacy surface in both the hardware and the interface design of the publicly evaluated DREs, and many of them reflect screen design challenges unique to DREs.

As the task of voting is made simpler through technology, it is important that the county officials responsible for selecting and implementing voting tools become familiar with usability issues and evaluation techniques. Cognitive walkthroughs, heuristic reviews, and – most critically – usability testing with representative users should be employed to evaluate and iterate on the designs before the next election.

     
Lawmakers take action on voting issue  

On May 22, Representative Rush Holt introduced the Voter Confidence and Increased Accessibility Act of 2003 (H.R. 2239) in the 108th Congress. Among other things, this bill stipulates in Section B that:

(i) The voting system shall produce a permanent paper record, each individual paper record of which shall be made available for inspection and verification by the voter at the time the vote is cast, and preserved within the polling place in the manner in which all other paper ballots are preserved within the polling place on Election Day for later use in any manual audit.

(ii) The voting system shall provide the voter with an opportunity to correct any error made by the system before the permanent record is preserved for use in any manual audit.

and then later...

All software and hardware used in any electronic voting system shall be certified by laboratories accredited by the Commission as meeting the requirements of clauses (i) and (ii).
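
Read literally, clauses (i) and (ii) describe a verify-then-correct loop. Here is a minimal, hypothetical sketch of that flow (the naming is ours, not statutory or vendor code):

    # A hypothetical sketch of the verify-then-preserve flow in
    # clauses (i) and (ii); the naming is ours, not the bill's.
    def cast_ballot(get_vote, voter_approves, preserve):
        vote = get_vote()
        while True:
            paper_record = f"PAPER RECORD: {vote}"  # clause (i)
            if voter_approves(paper_record):
                preserve(paper_record)   # kept for any manual audit
                return vote
            vote = get_vote()            # clause (ii): correct the error

    # Example with trivial stand-ins:
    ballot_box = []
    final_vote = cast_ballot(
        get_vote=lambda: "Candidate A",
        voter_approves=lambda record: True,
        preserve=ballot_box.append,
    )
    print(final_vote, ballot_box)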

A post hoc feedback/error-checking method is not quite the solution we, as usability professionals, would recommend. However, this legislation does indicate that the stakeholders recognize there is a problem...which is typically the first step in the right direction.

     
References  

Bederson, B., Herrnson, P., and Niemi, R. (2003). Electronic Voting System Usability Issues. ACM Computer-Human Interaction Conference (CHI 2003), Ft. Lauderdale, Florida, April 5-10.

Roth, S. (1998). Disenfranchised by Design. Information Design Journal, 9(1). Reprinted electronically.

CalTech/MIT Voting Technology Project Report: Residual Votes Attributable to Technology (Version 2), March 30, 2001.

Maryland State Board of Elections, 2002 Gubernatorial General Election - Voter Turnout - Statewide.

Georgians Express Confidence in New Electronic Voting System, Carl Vinson Institute of Government, University of Georgia.

Credit for Voting Report, Georgia Secretary of State Elections Office.

 
The Ergonomic Pragmatist


I can just see it now! Usability tests for every ballot! This is not going to happen. But this is a typical example of a huge application that urgently needs usability engineering. We can't do it with a usability practitioner for each ballot. Instead we must design for the overall case. We must create standards and validate that they work. We may need training for the ballot designers. But we need a streamlined and cost-effective approach...just like all the other large applications and environments we face.

Previous discussion on this issue  

Palm Beach County Ballot Illustrates Usability Problem, Professional review by Hal Miller-Jacobs, Ph.D., CPE, Managing Director, HFI.

The Usability of Punched Ballots, UI Design Update Newsletter, November 2000.
