A Vote-Count Scale Tale: In the Virginia Attorney General Vote, How Small Is Small, and Should Tied Votes Be Do-Overs?

The initial official vote tally for Virginia’s Attorney General race is in, and here it is:

 

Candidate       Votes
Obenshain   1,103,613
Herring     1,103,777
Write-In        4,926
Total       2,212,316

 

The difference between the two candidates is 164 votes. Out of the total votes cast, that amounts to a difference of 0.0074130%.

That percentage is hard to grasp, isn’t it? So how about we state it in parts per million? In those terms, the difference is 74 parts per million. And that makes sense, right? The total is about 2.21 million votes (a bit more than 10% above two million), and the margin of 164 is about 2.21 times 74 (a bit more than 10% above twice 74).
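If you want to check that arithmetic, here’s a quick sketch in Python, using only the numbers from the tally above:

```python
# Margin arithmetic for the tally above.
obenshain = 1_103_613
herring   = 1_103_777
write_in  = 4_926

total  = obenshain + herring + write_in  # 2,212,316
margin = abs(herring - obenshain)        # 164

print(f"margin as a percent: {100 * margin / total:.7f}%")         # 0.0074130%
print(f"margin in parts per million: {1e6 * margin / total:.0f}")  # 74
```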

We can also think of this in terms of cities. The population of Houston, for instance, is pretty close to the number of votes cast; its population, according to the 2010 census, was 2,100,263 — a smidgen less than the votes cast (if a “smidgen” can be 112,053).

Imagine, then, that the entire population of Houston casts a vote, and manages to split itself right down the middle except for the votes of 82 people. That’s what happened in Virginia.

What’s that you say? Why 82? Why not 164?

Think about it: this sort of analysis is a zero-sum game. As we are thinking about it here, each vote taken away from the person with the greater number of votes switches to a vote for the person with the lesser number of votes, so each switch that brings the count closer to even-steven is a -1 for the greater vote-getter and a +1 for the lesser vote-getter. So with 82 switches we end up with a -82 and a +82, bringing the tally for each candidate to 1,103,695.
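Here’s that see-saw as a quick sanity check (same numbers as above):

```python
herring, obenshain = 1_103_777, 1_103_613

# Each switched vote is -1 for the leader and +1 for the trailer,
# so the margin closes by 2 per switch: 164 / 2 = 82 switches to a dead heat.
switches = (herring - obenshain) // 2
print(switches)                                  # 82
print(herring - switches, obenshain + switches)  # 1103695 1103695
```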

So imagine that: all of Houston voting (every man, woman, child and infant, as we’ve set up this example) in an election, and the winner wins by the votes of just 82 people out of the entire city of 2.1 million.

My hunch is that more than 82 people out of 2.1 million would accidentally vote for the person they did not want to vote for (especially if children and infants were part of the voting . . . ).

*  *  *

There is only so much precision in systems. In B-school we learned about this by, believe it or not, counting the number of M&Ms in a small bag of M&Ms (plain, not peanut, if you have to know). Of the 20 people in the class, as I recall, maybe 15 had bags with the same number of M&Ms (say, 30), maybe one had a real outlier (say, 25), and the rest had one or two more, or one or two fewer, M&Ms in their bags. Then we charted the distribution and saw that it formed a pretty nice bell curve, one that, we were told by the statistics powers that be, would have been truly beautiful with more students counting more bags.

The point here was that a machine was filling those bags, attempting to do it precisely the same way every time, with no human error involved (probably . . . ) and yet it could not be perfect. Variation crept in (though in predictable ways, as shown via the bell curve).
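For the curious, here’s a toy version of that classroom exercise in Python. The fill model (a target of 30 M&Ms plus a handful of tiny random effects) is purely an assumption for illustration, not how the machines actually work:

```python
import random
from collections import Counter

random.seed(1)

def fill_bag(target=30):
    # Model the machine as the target count plus the sum of a few tiny,
    # independent errors; that sum is what produces the bell shape.
    return target + sum(random.choice([-1, 0, 1]) for _ in range(4))

counts = Counter(fill_bag() for _ in range(10_000))
for n in sorted(counts):
    print(f"{n:2d} M&Ms | {'#' * (counts[n] // 100)}")
```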

*  *  *

I don’t know who received the greater number of votes in the VA AG race. And I am here to tell you that anyone who thinks they do know is flat-out wrong.

What I do know, and with virtual certainty, is this: the numbers in the tally up above are wrong. There’s not a snowball’s chance in hell that they actually reflect the reality of the votes cast by the 2-plus million Virginians who up and voted. Systems just are not that precise.

Nonetheless, we are about to see a recount of all those votes, and I’ll bet you dollars to donuts (without trans fat, please) that the numbers at the end of that recount will not be the same as the numbers above, and that those newer numbers, too, will not accurately reflect the underlying reality. And the winner might well change. Does anyone think that a vote where two different counts lead to two different results is a model of vote-tallying excellence?
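To make the two-counts-two-answers point concrete, here’s a toy Monte Carlo sketch. The one-in-ten-thousand per-ballot error rate is a made-up assumption, there only to show that two independent counts of the same pile almost never match:

```python
import random

random.seed(2)
BALLOTS = 2_212_316
ERROR_RATE = 1 / 10_000  # assumed chance that a count misreads any given ballot

def count_ballots():
    # Each independent count misreads a small random number of ballots.
    misreads = sum(random.random() < ERROR_RATE for _ in range(BALLOTS))
    return BALLOTS - misreads

first, second = count_ballots(), count_ballots()
print(f"first count:  {first:,}")
print(f"second count: {second:,}")
print(f"difference:   {abs(first - second)}")  # almost never zero
```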

*  *  *

That does not mean, though, that we shouldn’t be trying to make our voting systems more precise.

The M&M operations folks responsible for those bag-filling machines are constantly trying to improve their process (continuous improvement was the catchphrase in the mid-90s), and I’ll bet they’ve succeeded, so that today there is considerably less variation in the number of M&Ms in those bags than there was nearly 20 years ago when we B-school students did the counting.

And we’re not talking about M&Ms here, folks. Voting is on an entirely different plane. We should be making our voting system more precise, more uniform and more auditable because it goes to the heart of how we organize ourselves. We use voting systems to vote, and the good-better-best ideas, as determined by our votes, survive-thrive-dominate while all the other ideas decline-fall-fail.

* * *

This country experienced a deep wound in November and December of 2000. Frankly, I don’t think that wound will ever fully heal for the half of the country that thought the other guy won. The Supreme Court was deeply wounded by it too — though the reputations of James Baker III and Warren Christopher were zero-sum see-sawed, with the former getting the +1 and the latter getting the -1 (those Reagan guys were always excellent at making those Carter guys look like lightweights).

What might the world look like right now had either O’Connor or Kennedy switched her or his vote? Talk about a zero-sum see-saw!

We’ll never, ever know.

* * *

So here we are, now 13 years later, and we have done very little to fix this system — very little to fix our overall voting system.

It’s an information flow task — a big information flow task, admittedly, but a straightforward information flow task is all that it is (I am not talking about voter apathy; that is an entirely different problem, though one that can be positively impacted by having better voting systems).

It is one we can fix if we want it fixed. I fear that some do not want to see it fixed. I hope I am wrong.

*  *  *

Here’s one last thought.

The recount we are about to experience will not be as horrendous as the one 13 years ago — gosh, how could it be? — but it figures to be a bloody mess nonetheless. Lawyers, guns and money, basically.

Why have it in the first place? How about we have a do-over instead?

My thinking is this: given that a vote in which the two nominees are within 74 votes per million is, for all intents and purposes, a tie, we should consider a system for defining what a tie is, and for then resolving ties.

For instance, how about we define a tied election as one in which, say, at least 100,000 people voted and the margin between winning and losing was less than, say, 500 parts per million (that’s 0.05%). And how about if the process for resolving a tied election is that one week afterwards we do it over again?

As in: leave the machines where they are, and let’s see how big the turnout is for the re-do!
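Sketched as code, with the caveat that the 100,000-voter floor and the 500 parts-per-million threshold are just the “say” numbers from above:

```python
def is_statistical_tie(winner_votes, loser_votes, total_votes,
                       min_turnout=100_000, max_margin_ppm=500):
    # A race counts as a tie only if enough people voted and the margin,
    # expressed in parts per million, falls under the threshold.
    if total_votes < min_turnout:
        return False
    margin_ppm = 1_000_000 * (winner_votes - loser_votes) / total_votes
    return margin_ppm < max_margin_ppm

# The VA AG race clears the bar easily: 74 ppm is well under 500.
print(is_statistical_tie(1_103_777, 1_103_613, 2_212_316))  # True
```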

 

 
