Some researchers have wondered why a few credibility issues in particular studies (see Marc Hauser in psychology, parts of the 2007 IPCC report, and anything at all related to cold fusion in physics) would damage the image of researchers across an entire discipline. It's plain old psychology.

Positive evidence can instead be a negative, if it's weak: according to results from Brown University psychologists, weak positive evidence is worse than no evidence at all.

They call the phenomenon "the weak evidence effect."

Consider the following statement: "Widespread use of hybrid and electric cars could reduce worldwide carbon emissions. One bill that has passed the Senate provides a $250 tax credit for purchasing a hybrid or electric car. How likely is it that at least one-fifth of the U.S. car fleet will be hybrid or electric in 2025?"

That middle sentence is the study's weak evidence. People presented with the entire statement, or with similar three-sentence statements on different topics, rated the final question's outcome as less likely than people who read the statement without the middle sentence. They did so even though other participants who saw the middle sentence in isolation rated it as positive evidence for, in this case, higher adoption of hybrid and electric cars.

"It's not a conscious choice to behave this way," said Brown psychologist Steven Sloman. "When people are thinking forward in a causal direction, they just think about the cause they have in mind and the mechanism by which that would lead to the consequence they have in mind. They neglect alternative causes."

In other words, give people a weak reason and they'll focus too much on it.  Hence, supportive but weak evidence seems to work against belief in a prediction.

And it occurs not just in political scenarios, but when real money is in play. In two of the five experiments, the authors asked participants to bet on an outcome, such as whether Republicans would retake the House in 2010 or whether milk in the fridge would spoil by a certain date. Participants could either receive $10 no matter what happened, or take a risk and receive $30 if the predicted outcome came to pass. Some were shown weak but positive evidence (e.g., a GOP candidate in a close race received an endorsement; the power to the fridge was out for 30 minutes) and some were not. In each experiment, those who saw the mildly reinforcing evidence were less likely to take the risk that the prediction would come true.
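As an illustrative aside (not part of the study's analysis), the bet structure implies a simple risk-neutral threshold: a $30 payoff versus a guaranteed $10 means the gamble has the higher expected value whenever the bettor's perceived probability of the outcome exceeds 1/3. A minimal sketch of that arithmetic, with the payoffs taken from the experiments above:

```python
# Expected-value comparison for the study's bet structure:
# a guaranteed $10 versus $30 if the predicted outcome occurs.
# (Illustrative only; the study measured choices, not these numbers.)

SURE_PAYOFF = 10.0
RISKY_PAYOFF = 30.0

def prefers_gamble(p: float) -> bool:
    """Return True if a risk-neutral bettor who believes the outcome
    has probability p should take the $30 gamble over the sure $10."""
    return p * RISKY_PAYOFF > SURE_PAYOFF

# Indifference point is p = 10/30 = 1/3. Weak evidence that *lowers*
# a participant's perceived probability below 1/3 flips the
# risk-neutral choice toward the sure $10.
for p in (0.30, 0.40):
    print(f"p={p:.2f}: take gamble -> {prefers_gamble(p)}")
```

So a shift in perceived probability of only a few points around the 1/3 threshold is enough to change which option a (risk-neutral) participant should take, which is why the weak-evidence manipulation can show up directly in betting behavior.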

The authors say the effect might help explain why other researchers have found a peculiar pattern in consumer behavior: sometimes adding a feature or promoting a product can make consumers less likely to buy it. They also believe it may explain why people have trouble supporting a sweeping policy proposal like ObamaCare even if they support the individual initiatives within it.

They say the effect is hardly an inevitable thought pattern. People in roles ranging from juror to scientist to investor to homebuyer often factor multiple pieces of evidence into their thinking.

"People have the potential to be good researchers if they have enough incentive to be," Sloman said.