    Do Biases And Analytical Improvements Of Polls Help Predict Elections?
    By Hank Campbell | November 9th 2012 02:00 PM
    The most startling thing to me about the election of 2012 was how spookily accurate the polls were. Social scientists in one camp want to dismiss determinism while the other camp has biology envy, but either the deterministic side got a big boost on Tuesday or the opposing sides in this election were so entrenched there was virtually no reason to vote, other than to see who had the best Get Out The Vote campaign. As I discussed in How Accurate Are Those Political Polls?, that is where the magic happens. Could polls predict how successful a Get Out The Vote campaign would be?

    Maybe. I know one party is scrambling to see what went wrong.

    What we have seen in the election's aftermath is the kind of creepy cult-of-personality fetishization of Nate Silver that we also see around 'medical' marijuana among Democrats; basically, people will infer a whole lot of authority without a critical examination of the data. Now, some of that is explainable, at least among scientists. Silver writes for the New York Times, and Republicans have declared that the NYT is always and forever in Camp Democrat, so science academia is rushing to the defense of its party.

    In the case of Silver, though, Republicans are only somewhat right: he is in Camp Obama, not blanket Democrats. He went into polling because Obama was 'trailing' Clinton in polls, and he knew what anyone with a clue knew: candidates with no name recognition trail in polls, so those polls are not telling a real story. Silver came up with an analytic improvement, adjusting for the 'lean' of individual polls to try to paint a clearer picture. It made Obama look a lot more favorable, but that is not what gets a baseball stats guy a blogging gig at the New York Times. Getting 49 of 50 state results right is, and he did just that in 2008.
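    The basic idea behind a 'lean' correction can be sketched simply: estimate each pollster's systematic tilt and subtract it before averaging, so a poll average is not dragged around by whichever houses happen to poll most often. The pollster names and lean values below are invented for illustration; this is not Silver's actual model or data.

    ```python
    # Hypothetical polls: (pollster name, reported incumbent share in %).
    polls = [
        ("Pollster A", 48.0),
        ("Pollster B", 52.0),
        ("Pollster C", 51.0),
    ]

    # Hypothetical historical "house lean" per pollster, in points.
    # Negative means the house tends to understate the incumbent.
    house_lean = {"Pollster A": -2.0, "Pollster B": +1.0, "Pollster C": 0.0}

    # Plain averaging treats every poll as unbiased.
    raw_average = sum(share for _, share in polls) / len(polls)

    # Lean-corrected averaging removes each house's tilt first.
    adjusted = [share - house_lean[name] for name, share in polls]
    lean_corrected = sum(adjusted) / len(adjusted)

    print(round(raw_average, 2))      # 50.33
    print(round(lean_corrected, 2))   # 50.67
    ```

    With symmetric leans the two averages would agree; the point of the correction is that when house effects do not cancel, the plain average inherits the net bias and the corrected one does not.
    
    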

    But was he really better than just plain old averaging? I was skeptical that state polls were accurate this year. I lost a $50 bet being a Luddite, believing Americans were not as polarized and entrenched as polls showed, because I knew pollsters were oversampling. It didn't matter whom they were oversampling; if Republicans wanted to insist pollsters were oversampling Democrats, and vice versa, that was fine by me. They just had to be doing it and I would win money, because all of the averagers used the same data. No one was betting against President Obama, the polls were not that inaccurate, but surely the independent people of Virginia or Florida or Colorado were going to defy determinism. Or so I thought.

    Nope. The people of Ohio voted against the guy who didn't want to give them a bailout, and everyone else also voted exactly as expected: we squared off old people versus young, marrieds versus singles, atheists versus religious, and white versus black. Cartoons have more subtlety than our election results.

    That's all well and good. But back to the core issue: did a whole bunch of secret sauce make the difference? While partisans in science were deifying Nate Silver because right-wing pundits were against him, the actual issue of accuracy was left behind. Molecular biologist Prof. Sam Wang does not write for the New York Times but made a stronger prediction for re-election than Silver did, without buying into Silver's model. Political scientist Prof. Drew Linzer of Emory University nailed the electoral vote count while Silver was still off by 38. Yet they got little attention.

    Clearly, if you want to get attention for accuracy in the science community, you need to be criticized by Karl Rove first.

    But let's look at the data. If Florida was the only real battleground state, some people got it right and some did not, but clearly everyone won; virtually the entire world was right on 49 of 50 states, even the Europeans betting on Intrade. Wang was wrong on Florida on November 6th, for example, while Silver was right, except that two days earlier Silver had made the same prediction Wang did, that Romney would win it. That isn't model superiority; unless we are to believe that many Floridians were undecided until election day, it was just winning a coin toss.

    If accuracy matters, let's glorify the accurate. Wang was right on 10 of 10 close Senate races, Silver on only 8. One state in the national election separated them, though not at the time of my bet on November 4th; Silver had Florida for Romney then. Accuracy doesn't need to be based on who writes for a New York newspaper. If we want the best model, a quantitative measure of prediction error, the Brier score, averaged over 51 races (the 50 states plus Washington, DC) and scaled so that correctness and high confidence earn a higher number, showed Wang as the best: 0.97 versus 0.964.
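    For the curious, the raw Brier score is just the mean squared difference between forecast win probabilities and the 0/1 outcomes, so in raw form lower is better, and a confident, correct forecaster beats a hedged one on the same races. A minimal sketch with made-up probabilities over three hypothetical races, not the actual 2012 forecasts:

    ```python
    def brier_score(forecast_probs, outcomes):
        """Mean squared error between forecast win probabilities and 0/1 outcomes.

        Lower is better; a perfect forecaster scores 0.0, a coin-flipper 0.25.
        """
        assert len(forecast_probs) == len(outcomes)
        return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

    # Hypothetical three-race example: 1 = Democratic win, 0 = Republican win.
    outcomes  = [1, 1, 0]
    confident = [0.95, 0.90, 0.10]  # high-confidence forecaster, right each time
    hedged    = [0.60, 0.55, 0.45]  # hedged forecaster, right each time too

    print(brier_score(confident, outcomes))  # 0.0075
    print(brier_score(hedged, outcomes))     # 0.18833...
    ```

    Both forecasters "called" all three races correctly, yet the confident one scores far better, which is why the score rewards confidence as well as correctness. Published comparisons sometimes report a rescaled version where higher is better, as the figures above appear to be.
    
    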

    Hey, those are both still really good. But if you have to put a probability between 0 and 1 on whether I will puncture popular partisan mythology masquerading as science fact, bet on 1.

    Here is hoping everyone is a lot more wrong in 2016; the only thing the polls did not get right this year is how many votes Charles Darwin would get in Georgia. America needs more nuance and creative thinking about our issues. If representative polling accurately predicts that no one is moving off of their pet positions for any reason, we don't need elections.