[Dear readers – this article is the latest in our "Shaky Political Science" series. In our 38-page report, "Shaky political 'science' misses mark on ranked choice voting," released in early December, we examined over 40 studies and found that many were sloppy and a discredit to political "science." These studies suffered from puzzling research methodologies, poorly constructed surveys and simulated elections, cherry-picked data, and faulty analyses that often were contradicted by results from real-world elections (a link to our full paper is here). Here are links to the first two articles in our series, the first summarizing the overall results of our report, and the second analyzing a study co-authored by University of Minnesota academic Larry Jacobs, which was one of the most error-prone of all the research we reviewed.]
In this third article in our "Shaky Political Science" series, we shine a spotlight on two studies:
Overvotes, Overranks, and Skips: Mismarked and Rejected Votes in Ranked Choice Voting by Stephen Pettigrew and Dylan Radley. Social Science Research Network, 2023 (number 26 in our paper's listed studies)
and
Deficiencies in Recent Research on Ranked Choice Voting Ballot Error Rates by Alan Parry and John Kidd. Institute for Mathematics and Democracy, 2024 (number 27 in our listed studies)
We are pairing these two studies because the Parry and Kidd study (the second one listed) stands out positively for a number of reasons, not least because it is a rare example of academics holding other researchers accountable for flawed research – in this case, the Pettigrew and Radley study listed first above.
In their study, researchers Alan Parry and John Kidd of Utah Valley University analyzed two studies of RCV elections and showed that both had serious methodological and analytical flaws. One of them, the above-listed "Overvotes" study by University of Pennsylvania political scientists Stephen Pettigrew and Dylan Radley, broadly asserted that when voters rank candidates in order of preference, ballot error rates increase greatly. Indeed, Pettigrew and Radley claimed that ranked choice voting caused ballot-marking errors to increase tenfold.
But when Parry and Kidd examined the Pettigrew-Radley study and its data, they found these claims to be baseless and misleading, and also presented largely out of context, with little to no comparison of RCV to other electoral methods such as plurality voting (i.e., single-choice voting), which is the most widely used method in the US. Yet the Pettigrew-Radley study has been cited extensively by anti-RCV activists, most notably against statewide RCV ballot measures in Colorado, Nevada, and elsewhere in 2024. This flawed research was cited widely in traditional and social media, and a Nevada television ad opposing RCV invoked the Pettigrew-Radley study, further distorting its claims.
Prior to the Pettigrew-Radley study, several other studies had indicated that ballot errors in RCV elections largely follow the same patterns as errors in non-RCV elections, with RCV marginally higher in some races (Kimball and Anthony, 2016; Coll, 2021; Neely and McDaniel, 2015; Maloy and Ward, 2021; Maloy, 2020).* Pettigrew and Radley, however, claimed that RCV actually increased ballot error rates by large percentages, and even more so among certain racial/ethnic demographics.
Pettigrew and Radley examined cast vote record (CVR) data from nearly 3 million actual ballots from elections in Alaska, Maine, New York, and San Francisco. They claimed that 1 in 20 RCV ballots (4.8%) were "improperly marked" in some way. The authors defined three ways a ranked ballot can be mismarked: a voter can overvote, overrank (i.e., rank the same candidate more than once), or skip a ranking. But the major flaw in this approach is that an improperly marked ballot is not the same as an error so egregious that it invalidates a voter's ballot.
For example, skipped rankings are inconsequential, since RCV software is programmed to count the next candidate ranked after the skipped ranking. An overrank, or duplicate ranking, means a voter has marked the same candidate at two or more rankings, which does not help that candidate by giving her or him a second vote but also does not spoil the voter's ballot; duplicate rankings have a null effect. Of the three types of marks examined, only an overvote (picking two different candidates for a single ranking) would invalidate a ballot and disenfranchise a voter.
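To make these distinctions concrete, here is a minimal sketch, in Python, of how a typical RCV tabulator treats the three kinds of "improper" marks described above: skipped rankings are simply passed over, duplicate rankings resolve to a single countable choice, and only an overvote stops the ballot from being counted at that point. The ballot format and the function name are our own illustrative assumptions; exact rules vary by jurisdiction, so treat this as a sketch rather than any vendor's actual algorithm.

```python
# Illustrative sketch only: the ballot format and behavior are simplified assumptions,
# not any jurisdiction's certified tabulation rules.

def first_countable_choice(ballot, eliminated=()):
    """Return the highest-ranked countable candidate on one ballot, or None
    if the ballot is exhausted or stopped by an overvote.

    `ballot` is a list of ranking slots; each slot is a list of the candidates
    the voter marked at that rank (an empty list is a skipped ranking).
    """
    for rank in ballot:                       # walk rankings in order of preference
        marks = set(rank) - set(eliminated)   # ignore already-eliminated candidates
        if not marks:
            continue                          # skipped ranking: move on to the next rank
        if len(marks) > 1:
            return None                       # overvote: ballot cannot be counted here
        return marks.pop()                    # a duplicate ranking of the same candidate
                                              # simply resolves to that candidate (null effect)
    return None                               # no usable ranking left (ballot exhausted)

# A skipped 2nd ranking and a duplicated 1st choice still count as the voter intended:
ballot = [["Alice"], [], ["Alice"], ["Bob"]]
print(first_countable_choice(ballot))                         # Alice
print(first_countable_choice(ballot, eliminated=["Alice"]))   # Bob

# Only an overvote (two different candidates at one rank) blocks counting:
print(first_countable_choice([["Alice", "Bob"], ["Carol"]]))  # None
```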
But overvotes occur in every election, including non-RCV elections. For example, in California's June 2012 US Senate primary, which used the typical "one-choice" plurality method and not RCV, there were more than five times as many overvotes among San Francisco and Oakland voters as in the RCV mayoral elections in those cities. In San Francisco's June 2018 election, there were seven times as many overvotes in the California governor's race as in San Francisco's RCV mayoral race.
In short, there is no "there" there. As Parry and Kidd pointed out, the other two types – skipped rankings and duplicate rankings – "most likely indicate a political expression rather than voter confusion." In fact, Parry and Kidd found that 90% of those "improperly" marked ballots were actually counted as intended by the voter. Pettigrew and Radley also mentioned this fact, but it did not dawn on them that it significantly undermined their very thesis. Instead of 5% of voters being negatively affected by how they marked their ballots, far less than one percent were affected. "Claiming that one in 20 ballots were problematic exaggerates that the issue is over 10 times larger than it actually is, and as such, is remarkably misleading," wrote Parry and Kidd.
Indeed, the overall average rejection rate for RCV elections was 0.53%, not 5%. Pettigrew and Radley then compared that to a 0.04% rejection rate for non-RCV elections, which means the actual acceptance rate was 99.96% for non-RCV elections and 99.47% for RCV elections – both extremely high rates of valid ballots. Pettigrew and Radley hinged their thesis on their finding that the RCV rejection rate was indeed about 10 times larger than the non-RCV rate. But as Dr. Kidd observed, tongue firmly planted in cheek: "Ten times a tiny number is still a tiny number."
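Putting those figures side by side makes the point plain. The quick arithmetic below is only an illustration using the rates quoted above; it shows how little daylight there is between the two acceptance rates.

```python
# Simple arithmetic on the rejection rates quoted above (illustration only).
rcv_rejection     = 0.0053   # 0.53% of RCV ballots rejected
non_rcv_rejection = 0.0004   # 0.04% of non-RCV ballots rejected

print(f"RCV ballots counted:      {1 - rcv_rejection:.2%}")      # 99.47%
print(f"non-RCV ballots counted:  {1 - non_rcv_rejection:.2%}")  # 99.96%
print(f"difference in rejection rates: {rcv_rejection - non_rcv_rejection:.2%}")  # 0.49%, about half a percentage point
```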
But even this 0.53% figure overstates the problem if we examine all of the more than 400 RCV races with more than two candidates since 2004, not just the ones studied by Pettigrew-Radley. FairVote has published data showing that in these 400+ RCV races, the median voter error rate was 0.3%. Pettigrew-Radley also never pointed out, for example, that voters using mail-in ballots in non-RCV elections often face much higher rates of invalidated ballots, as a 2020 study by MIT's Charles Stewart found. That study concluded that in the 2016 presidential election, as many as four percent (one out of 25) of mail-in ballots were never counted, many times higher than the rate of invalid ballots that Pettigrew and Radley found in RCV elections.
Parry and Kidd rightly concluded that "Using sensationalized language of this type to exaggerate the impact of this difference indicates significant bias in the presentation of these results." Indeed, study after study has shown that, overall, relatively few ballots in RCV elections contain an error, and even fewer ballots are rejected; even using the limited sample from Pettigrew and Radley, about 199 out of every 200 voters cast a valid ballot. For nearly all ballots that were "improperly marked," as Pettigrew and Radley defined it, the voter's intent was nevertheless clear enough that the ballot could be counted as intended.
Contrary to Pettigrew-Radley, a number of other researchers (such as Donovan, Tolbert and Gracey, and Donovan, Tolbert and Harper**) found "no differences within RCV cities in how whites, African Americans, and Latinx respondents reported understanding RCV" and that "our search for race/ethnic differences largely produced null results."
Parry and Kidd also point out the lack of comparative context in the UPenn study by Pettigrew and Radley, such as the failure to compare RCV to other electoral methods like plurality elections. If one weighs the large increase in meaningful votes in an RCV election – thanks to voters' newfound ability to rank multiple candidates without their favorites spoiling each other – against the tiny increase in errors, there is no doubt that voters have much more voice in their democracy with RCV. As Parry and Kidd write, "Using RCV, more voters are able to participate in the choice of the final winner, and voters also have more freedom of political expression."
Indeed, research from FairVote found that RCV causes an average of 17% to 30% more votes to directly affect the outcome between top candidates: 17% more effective votes across all RCV races with more than two candidates, and 30% when the tally goes to at least a second round of counting (see "With ranked choice voting, 17% more votes make a difference"). Parry and Kidd concluded that "the average 17% increase in meaningful votes using RCV is considerably larger than the average ballot rejection rate of 0.53% using RCV observed by Pettigrew and Radley" – in fact, about 32 times larger!
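That closing comparison is simple arithmetic on the two figures quoted above; the snippet below is only an illustration of where the "about 32 times" comes from.

```python
# Where the "about 32 times larger" figure comes from (illustration only).
meaningful_vote_gain = 0.17      # average 17% more meaningful votes under RCV (FairVote)
rcv_rejection_rate   = 0.0053    # 0.53% average RCV ballot rejection rate

print(round(meaningful_vote_gain / rcv_rejection_rate, 1))   # 32.1: the gain dwarfs the rejection rate
```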
It's important that research about RCV contextualize its impact more comprehensively and compare RCV and single-choice (plurality) voting more carefully. Rather than providing quality research on the workings of ranked choice voting, what the Pettigrew and Radley study actually revealed, charitably, is that its authors do not really understand how ranked choice voting works.
Researchers Alan Parry and John Kidd deserve a new kind of political science prize – let's call it the Truth and Accountability Prize. That's because they modeled professional behavior that more academics should be engaging in: holding each other accountable for sloppy research.
What has been striking about our findings, after reviewing dozens of academic studies on ranked choice voting, is not only that a number of political scientists are producing deeply flawed and even inept research, but that they then cite each other's flawed research as the academic substrate that legitimizes their own – an unvirtuous circle. One bogus study we have exposed, Jason McDaniel's Writing the Rules to Rank the Candidates: Examining the Impact of Instant-Runoff Voting on Racial Group Turnout in San Francisco Mayoral Elections, has nevertheless been cited by at least 35 other studies and continues to be cited in academic work today.
It causes one to wonder whether these political scientists, often in a rush to finish their own papers and submit them to journals as part of a never-ending quest for tenure and/or "publish or perish," actually read the research papers they cite in their own work. Many of the studies we have examined have a kind of "cut and paste" quality, with the authors filling out the Introduction and Methodology sections of their papers by citing the same dubious prior research, such as the McDaniel, Nolan McCarty, or Lawrence Jacobs studies that we have exposed in this series. In this way, bad research perpetuates bad research.
Surprisingly, outside of the peer review process, it turns out there is limited quality control in political and social science over what gets published or even which papers are cited in new academic papers. Various websites that publish huge volumes of academic papers, such as the Social Science Research Network (SSRN), arXiv, SocArXiv, OpenReview.net, and PeerJ, do not peer review submissions. Instead, they function as repositories where authors can self-publish their own research papers without undergoing a rigorous review process. Consequently, to cite one example, little-known Korean researchers claimed in July 2023, in two arXiv publications, to have created a room-temperature superconductor that would revolutionize power grids, and the news quickly went viral in mainstream and social media. Yet by mid-August these claims had been debunked by papers from major labs.
While having a venue in which researchers can post early drafts of their research and solicit feedback has value, these easy-publishing, blog-like venues have unfortunately contributed to the spread of questionable studies and scientific results across a number of academic disciplines. Given the lax standards, these publishing platforms can inadvertently dilute academic rigor and credible research. This becomes especially problematic when, for example, SSRN's publication process is misunderstood by the general public as the equivalent of peer-reviewed publication (we know whereof we speak, since our own paper was "accepted" for publication on SSRN with no review process whatsoever). Today, SSRN hosts nearly 2.6 million authors and 1.6 million full-text papers, according to its website.
The hopeful goal of this article – and our report in general – is that our critique of existing RCV research will lead to much improved scholarship, better media reportage, and a greater understanding of ranked choice voting and its impacts.
Studies cited in this article:
* Voter Participation with Ranked Choice Voting in the United States by David Kimball and Joseph Anthony. University of Missouri-St. Louis, 2016.
Demographic Disparities Using Ranked-Choice Voting? Ranking Difficulty, Under-Voting, and the 2020 Democratic Primary by Joseph Coll. Cogitatio Press, Politics and Governance, 2021.
Overvoting and the Equality of Voice under Instant-Runoff Voting in San Francisco by Francis Neely and Jason McDaniel. California Journal of Politics and Policy, 2015.
The Impact of Input Rules and Ballot Options on Voting Error: An Experimental Analysis by J. S. Maloy and Matthew Ward. Cogitatio Press, Politics and Governance, Vol. 9, No. 2, 2021.
Voting Error across Multiple Ballot Types: Results from Super Tuesday (2020) Experiments in Four American States by Jason Maloy. SSRN, 2021.
** Self-Reported Understanding of Ranked-Choice Voting by Todd Donovan, Caroline Tolbert and Kellen Gracey. Social Science Quarterly, 2019.
Demographic differences in understanding and utilization of ranked choice voting by Todd Donovan, Caroline Tolbert and Samuel Harper. Social Science Quarterly, 2022.
