To dredge the water-logged corpse up again, the scholars behind a paper (eventually published years later in PLoS One, with a few modifications) claimed conservatives were more 'afraid' in a risk-taking task, and that this is why they like social authoritarian policies. This made no sense to anyone who knows which party actually uses far more social authoritarianism to force society to obey its desires of the week; it isn't Kansas City banning everything, it's places like San Francisco and New York City. Prior papers had found that everyone is motivated by fear, not just Republicans, and a later one determined that liberals are just being politically correct: when they get drunk and lose their inhibitions, they become more conservative. But with science media and social science academia overwhelmingly voting one way, out came a delightful jumble of motivated reasoning, identity-protective cognition, naïve realism and a bunch of other science-y-sounding terms for what your dad probably told you, bereft of any psychology degree at all: 'people believe what they want to believe'. So when a psychology paper said Republicans are scared of risk, the skeptical filters were turned off by people who like to believe that stuff.
Biologists dismissed the claims on scientific grounds, while others argued only where they had a chance to be right: about the interpretation of the insults, I mean results. If you argue about interpretation, you can never really be wrong; it is the same subjectivity I ridiculed in Undermine Science By Redefining It, where people will choose their own definition or lump science in with their morally relative issue.
But almost no one bothered with the methodology. Dan Kahan, Professor of Law and Psychology at Yale Law School and proponent of the Cultural Cognition hypothesis, does just that. He wonders why, many years after the problems of too many papers combining fMRI with simple, basic errors of causal inference were exposed, undermining the credibility of an alarming number of papers using MRI, anyone would still make the exact same errors: things like voodoo correlations and opportunistic observation.
As explained, they selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain. They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.”

It doesn't matter. As I said, for people with motivated reasoning, seeing social science and humanities scholars behind "Red Brain, Blue Brain" in the title was enough to know what they were getting. As of this writing, it has almost 28,000 reads and almost 1,900 shares.
This is bogus. Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . . voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn. None.
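The error Kahan and Kriegeskorte et al. describe is easy to demonstrate on pure noise. In this illustrative sketch (the subject counts, voxel counts, and the crude sign-weighted classifier are my own assumptions, not the Schreiber et al. pipeline), voxels are selected because they correlate with the party label, and "accuracy" is then reported on the very same subjects; the same procedure scored on an independent held-out half of the data collapses back to chance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 60, 5000
X = rng.standard_normal((n_subjects, n_voxels))    # pure-noise "activations"
y = rng.integers(0, 2, n_subjects).astype(float)   # party label, independent of X

def corr_with_label(X, y):
    """Pearson correlation of each voxel (column) with the label."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    return (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

def accuracy(X_sel, y_sel, X_eval, y_eval, k=10):
    """Select the k voxels most correlated with the label in (X_sel, y_sel),
    then classify (X_eval, y_eval) with a crude sign-weighted sum of them."""
    corr = corr_with_label(X_sel, y_sel)
    top = np.argsort(np.abs(corr))[-k:]            # voxels chosen BECAUSE they correlate
    w = np.sign(corr[top])
    threshold = np.median(X_sel[:, top] @ w)       # cut point fit on the selection set
    pred = (X_eval[:, top] @ w > threshold).astype(float)
    return float(np.mean(pred == y_eval))

half = n_subjects // 2
circular = accuracy(X, y, X, y)                               # select and test on same subjects
held_out = accuracy(X[:half], y[:half], X[half:], y[half:])   # independent test half
print(f"circular 'accuracy' on pure noise: {circular:.2f}")   # far above chance
print(f"held-out accuracy on pure noise:  {held_out:.2f}")    # near 0.5
```

With thousands of candidate voxels and a few dozen subjects, some voxels will correlate strongly with any label by luck alone, so the circular score looks impressive even though there is nothing to find; only the independent-dataset score is meaningful.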
If you want to believe so, this graphic shows Republicans are more scared of the unknown than open-minded, super-smart Democrats. Credit and link: doi:10.1371/journal.pone.0052970
Andrew Gelman, professor of statistics and political science and director of the Applied Statistics Center at Columbia University, is even harder on the paper in the context of talking about why post-publication peer review is often no better than the pre-publication kind, saying in a comment on Kahan's article:
Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on arXiv. PLOS-One publishes some good things (so does arXiv) but it’s the place people place papers that can’t be placed. We can deduce that the paper was rejected by Science, Nature, various other biology journals, and maybe some political science journals as well.

Post-publication peer review, Gelman notes, is no better because peers often just link to things without really analyzing the data and methods. I took his criticisms of PLoS One at face value, as many others have (the other PLoS journals are not having the same issue he alleges), because he generally knows his stuff and isn't claiming that more prestigious journals never put out rubbish articles. But that this paper was rejected by other journals over a period of years before being accepted at PLoS One could be telling. It could be telling us the credit card cleared and the paper survived the four items an editor has to check off to approve something. I can't say that happened for sure, though, so Gelman has a fine point about post-publication peer review being a little light as well.
I’m not saying you shouldn’t criticize the paper in question, but you can’t really demand better from a paper published in a bottom-feeder journal.
Again, just because something’s in a crap journal doesn’t mean it’s crap; I’ve published lots of papers in unselective, low-prestige outlets. But it’s certainly no surprise if a paper published in a low-grade journal happens to be crap. They publish the things nobody else will touch.
Deja voodoo: the puzzling reemergence of invalid neuroscience methods in the study of "Democrat" & "Republican" brains by Dan Kahan
Post-publication peer review: How it (sometimes) really works by Andrew Gelman
Schreiber D, Fonzo G, Simmons AN, Dawes CT, Flagan T, et al. (2013) Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE 8(2): e52970. doi:10.1371/journal.pone.0052970