Whether that makes political science or the peer review system look worse will be a matter of debate.
The facts are these: a political science paper claimed it was able to change minds on a polarizing issue like same-sex marriage in just minutes. Maybe Science magazine editors took down their skepticism filters during whatever passes for peer review of humanities papers, it is hard to know. But because cosmic claims in the humanities are an easy sell to most audiences, because Science is a world-class magazine, and because the conclusion endorsed the right liberal values, the results eventually got covered by This American Life and then...
...well, by then the more skeptical, conservative part of academia had already kicked in.
Naturally, a bold claim like that, that if we just talked to each other there would be huge swings in support for gay marriage, was going to invite replication attempts. Scholars at U.C. Berkeley and Stanford tried to do just that. They couldn't get as many people to participate as the original paper had, even though they were also paying respondents, and in going over the paper it began to look like the results were a little too perfect. The changes in attitudes on gay marriage and abortion in the original data looked great, with no meaningful deviation from the expected distribution. In science that would be odd, but it seemed to have sailed right past peer review at Science. The other scholars smelled blood in the water.
They confronted the scholar who did the actual work, who denied falsifying anything but somehow couldn't produce the actual surveys. The Big Name on the paper, Professor Don Green of Columbia, was then told of the problems and asked Science to retract it, even though he still wants to believe the result, telling Ira Glass at This American Life, "just because the data don’t exist to demonstrate the effectiveness of this method of changing minds, doesn’t mean the hypothesis is false." That is probably frustrating to people in political science who might like that 'science' word to actually be evident somewhere. Should Green have put his name on something because it matched his self-identification? Probably not. Should peer review have caught it? Also probably not. The humanities, like the social sciences, are pretty fuzzy methodologically, so peer review isn't meaningful there.
But these hoaxes happen in science too. A few months ago I wrote in the Wall Street Journal about the importance of replication and how even the NIH was unsure of the validity of a lot of the $30 billion per year in studies it funds, because so many couldn't be reproduced. Corporations have no more confidence in much of that work. And I noted that PNAS had published a paper that also had fishy, too-perfect pictures and conclusions, another case where a researcher, in that instance a biology professor at Berkeley, had refused to turn over any data. But he didn't have to: PNAS let him hand-pick his reviewer, a friend in the same department at Berkeley, a friend whose wife was the chair.
Though no one was able to replicate the results (they have still never been replicated), and the scientist behind them refused to provide any data (if it exists), even to the EPA, which convened a special panel on the finding because it was so provocative, that paper was never withdrawn, unlike this one.
So Professor Green may feel bad for having been shown not to have looked critically at the data in a paper that carried his name, but we at least know a political science professor has more scientific integrity than a whole group of biologists at Berkeley.