    Scientific Method In Decline?
    By Michael White | December 29th 2010 01:03 PM | 18 comments
    Jonah Lehrer, in The New Yorker, on the slipperiness of the scientific method:

    "The Truth Wears Off: Is There Something Wrong With The Scientific Method?"
    The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

    But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
    The piece, dressed up in a bit of mysticism, is essentially a description of some well-known (but too rarely acknowledged) biases in science: unconscious selection of favorable data, the tendency to publish only positive results, and the effects of randomness.

    It's an important point, one that was first taught to me in a physics class when we learned about Millikan's efforts to measure the charge of the electron - a classic case of selection bias.

    Lehrer quotes some scientists in his article who suggest that this is science's dirty secret, one that researchers are ashamed of. But why should we be ashamed of this?

    Science is a human enterprise. Mistakes get made. Biases exist. And yet, amazingly, science still works, which is really the only justification for its existence. Science is still the most powerful approach for manipulating and predicting the physical world, period. No other philosophy comes close. With all of its flaws, with science we still manage to build nuclear reactors, create glow-in-the-dark fish, find cancers using NMR, build superconducting materials, send robots to Mars, and track the spread of new flu viruses.

    Given this track record, scientists have no need to be ashamed that, even in the absence of fraud, science is imperfect. Feynman puts it more eloquently:
    The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty damn sure of what the result is going to be, he is still in some doubt. We have found it of paramount importance that in order to progress, we must recognize our ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty — some most unsure, some nearly sure, but none absolutely certain. Now, we scientists are used to this, and we take it for granted that it is perfectly consistent to be unsure, that it is possible to live and not know. But I don’t know whether everyone realizes this is true. Our freedom to doubt was born out of a struggle against authority in the early days of science. It was a very deep and strong struggle: permit us to question — to doubt — to not be sure. I think that it is important that we do not forget this struggle and thus perhaps lose what we have gained.

    "The Value of Science," address to the National Academy of Sciences (Autumn 1955)
    Lehrer ends his piece with a misleading statement:

    We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
    The uncertainty of science doesn't mean that we're simply left with an arbitrary choice of what to believe. You still must follow the evidence, which, as Feynman points out, needs to be weighed because "scientific knowledge is a body of statements of varying degrees of certainty." (This is something sorely neglected in most science reporting, which portrays each new paper as a sensational breakthrough.)

    A recent result, especially one supported by just a handful of studies (such as the effectiveness of a drug), gets less weight than something that has been accumulating multiple lines of evidence for decades (the mass of an electron, the common ancestry of humans and chimps). Yes, in some rapidly changing fields, some results that are later overturned end up in textbooks. Other results stand the test of time (the structure of amino acids, the genetic code).
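    To make that weighing concrete, here is a minimal Bayesian sketch (my own illustration, not anything from Lehrer's piece or the studies it cites): treat each independent line of evidence as a likelihood ratio and multiply it into the prior odds. The numbers are invented.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Update prior odds with a sequence of independent lines of
    evidence, each expressed as a likelihood ratio. This is a
    textbook Bayesian identity; the figures below are made up."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# One modestly supportive study: odds move from 1:1 to just 3:1.
print(posterior_odds(1.0, [3.0]))
# Ten independent, convergent lines of evidence: odds of 59049:1.
print(posterior_odds(1.0, [3.0] * 10))
```

    A single modestly supportive study barely moves the odds, while many independent, convergent lines of evidence overwhelm any reasonable prior - which is why decades-old results carry more weight than last month's paper.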

    We shouldn't be ashamed of varying degrees of uncertainty in science. I'll give xkcd the last word:

    Comments

    Gerhard Adam
    I don't think it's science or the scientific method that is the problem. Instead, I would argue that, too often, basic studies are being used as evidence on which to base public policy and to justify exploiting (or the idea of exploiting) technologies for which all the information isn't yet known.

    As a result, science is being blamed for what is really the use of incomplete data. 

    Another problem, though, is that data is often being interpreted well beyond what has actually been studied. While a particular phenomenon may have been examined, that doesn't automatically lead to globally applicable criteria, nor does it automatically validate more far-reaching theories.

    Most systems are too complex for the simplistic reductionist answers that get posed as questions in many studies. It seems that we often forget that we're gathering evidence, not establishing universal truths. The mere fact that the writer used the word "true" is sufficient cause for concern.

    While I realize "truth" is often used to indicate "facts", it shouldn't be confused with accuracy. Truth is only relevant as a contrast to deception. Beyond that, the use of such a term only introduces another level of bias, one that permits the notion that belief systems are on an equal footing with it.
    Mundus vult decipi
    socrates
    Bravo, Michael. Nicely presented. I ran across this article by accident a few weeks ago and was appalled at what I read. The subtitle immediately raised my suspicion. The text increasingly incensed me. But it was precisely the concluding statement that you quote that nearly caused me to jump to my feet (at the local supermarket Starbucks) and shout "Is There Something Wrong With the Journalistic Method?" :-)

    Seriously, there seems to be a cultural love/hate relationship with science. Admired on the one hand, but perhaps feared on the other. There does seem to be a desire among some to "keep science in its place", for fear, perhaps, that it might get too powerful. This journalist certainly seemed to be doing his misguided best to "cut science down to size".

    Thanks for setting the record straight (science does not tell us that all truths are equally valid!). Your presentation was a lot more convincing (and appropriate) than any spontaneous outburst of mine would have been in the middle of the supermarket.
    Citizen Philosopher / Science Tutor
    Nice post! Yeah, I have to agree this particular article seemed pointless and irritating. He tried to take bias (which is unavoidable), uncertainty (which is part of science), and the complexity of nature (of which everyone is already aware) and cobble them together into some ridiculous postmodernist "science critique". It's not even clear to me what he means by science being "in decline" -- what, is he trying to argue that the theory of relativity or the theory of evolution somehow becomes less valid with time? And what does he mean by "we have to choose what to believe"? Does he think that vitalism and young-earth creationism are equally valid "beliefs"? Pretty startling, coming from a science writer...

    miles
    Nice article, Michael. It reminded me of when one of my students asked me if he could use his own method to solve a problem. I was taken aback, since I was not expecting that from one of my students. My answer was "sure, as long as it can be duplicated". I went home thinking that my answer was wrong, because an erroneous method can also be duplicated. At the next meeting I had with my class, I clarified my point. I told them that it is all right if the steps suggested by the textbook are not followed strictly, as long as biases and errors are minimized in the process, and the conclusions offered as answers to the problem are supported by evidence that can be retrieved and by data from experiments that can be duplicated.
    Indeed, a good conclusion may not last forever, but it is something that can withstand the test of time. Errors and biases always have room in scientific study, for they may be there to challenge and motivate the researcher to go on searching for, if possible, the ultimate truth. The problem lies in how you can use, isolate, and minimize these errors and biases to your advantage.
    Fred Phillips
    Thanks, Michael. Your article is timely and important. Three observations:


    1. Rejecting a null hypothesis (or not) at the 90% confidence level roughly means that if the experiment were repeated 100 times on new and independent samples, one would expect the same decision about 90 times out of the hundred. So, if a study is replicated two or three times (which is all the replication we can practically expect from independent researchers, except for truly high-profile things like cold fusion or cancer cures), an opposite conclusion in one of them should not be taken as discrediting the original study. In fact, we can estimate the probability of a contrary conclusion in n replications! It's only when the number of replications becomes large (and as I said, this almost never happens) and the contrary conclusion is arrived at much more than 10% of the time that we can call the original study crap. And of course it's elementary that none of this constitutes "proof." The problem is exacerbated by medical announcements made on the basis of ridiculously small samples, e.g., 50 patients - and sadly, this is common.
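    As a rough illustration of that estimate, here is a minimal Python sketch (my own addition, not the commenter's): it takes the 90% figure at face value and treats each replication as an independent trial, so the number of contrary conclusions is binomial.

```python
from math import comb

def p_contrary(n, p_agree=0.9, k=1):
    """Probability of at least k contrary conclusions among n
    independent replications, each of which reaches the original
    conclusion with probability p_agree (the commenter's 90%
    figure, taken at face value for illustration)."""
    p_contra = 1.0 - p_agree
    # Binomial tail: P(X >= k) with X ~ Binomial(n, p_contra)
    return sum(comb(n, j) * p_contra**j * p_agree**(n - j)
               for j in range(k, n + 1))

# With only three replications, one contrary result is quite likely:
print(f"P(>=1 contrary in 3 replications) = {p_contrary(3):.3f}")  # ~0.271
```

    Even under this idealized independence assumption, a single contrary result among three replications turns up more than a quarter of the time, which is exactly the commenter's point.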


    2. In management research (my field), observational studies and natural and quasi-experiments are the norm. Rarely (except, say, in large-scale Google advertising experiments) do we enjoy controlled laboratory conditions. If we explain 40% of the variance in a data set, we've succeeded! And it's enough to make someone some money, or spur similar studies that might lead to public policy changes. Replication is basically out of the question, because the uncontrolled variables will vary between one study and the next. The true test of a research result is use in management practice - similar to your “it works” justification for hard sciences. (Senior managers have pretty good intuitions, and have often migrated close to optimal practice anyway, i.e., the practice the researcher was going to recommend. It can be embarrassing. Then again, sometimes we can tell them something new.)
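    For readers outside the field, here is a small Python sketch of what "explaining 40% of the variance" means (synthetic data with made-up coefficients, not any actual management study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observational data: one predictor, lots of noise.
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=1.0, size=200)

# Ordinary least squares and the fraction of variance explained (R^2).
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals.var() / y.var()
print(f"R^2 = {r_squared:.2f}")  # around 0.4 with this noise level
```

    A model can be this far from deterministic and still be useful for decisions, which is the commenter's point about management practice.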


    3. In this age when one cannot trace who funds a political action committee, and these committees are very PR-savvy, and small-government ideologues are targeting science funding (with visible results as budgets are cut), we should have our political antennae out when we see an article like Lehrer's. Who paid whom to place such an article in a major magazine? (And yes, I do largely still trust The New Yorker, but I'm still reeling after learning that the Washington Post, which owns the for-profit Kaplan University that has been investigated for student financial aid fraud, never wrote about the investigation. Journalists are not angels, unfortunately. We don't have to be paranoid, but we should be, just as you are suggesting, critical.)


    Vladimir Kalitvianski
    One of the clearest examples of decline in theoretical particle physics is the "canonization of renormalization": a prescription, sometimes lucky but mathematically always wrong, invoked to "repair" bad (unphysical) calculation results. It is practically impossible to get attention for an alternative approach to doing physics because, despite the evident flaws of the current situation, any reasonable criticism is simply dismissed as heresy.


    Lehrer: "We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe."

    Lehrer seems to have done little science himself, nor to have read much of what scientists have said about their work and discipline. Of course not all true ideas can be proven. Lehrer must never have heard of Kurt Gödel. What has all this to do with science? What does Lehrer propose to replace science with? He is like a fundamentalist who demands 100 percent gold-plated Truth. Penicillin, gene therapy, quantum computing, bacteria with arsenic in their DNA, and the discovery of over 400 exoplanets are all results of the scientific method. Apparently Lehrer is hard to impress and easily bored.

    Vladimir Kalitvianski
    Again, if we look at fundamental theoretical physics, a lot of hype has appeared, and many serious researchers are searching nature for the theories' "predictions". In other words, theoretical patterns influence the interpretation of experimental data. That is a very dangerous prejudice.
    Science works because it's self-correcting and regularly re-evaluated.

    It's difficult to correct for the observed being tainted by the observer, but like you say, it's human, and we learn more from mistakes than from successes.

    If there's ever a better system, the scientific method will fade out, willingly replaced by something better.

    That really sets it apart from woo "medical treatments" and religions that cling to tradition and magical thinking and hang on well past their usefulness.

    kerrjac
    Interesting post.
    Tangentially, I've been absorbed in 20th-century history after starting a book, Modern Times, by Paul Johnson. It's amazing to see how ideas took certain leaders and intelligentsia by storm simply because they sounded good. Lenin called his brand of Marxism scientific, and therefore bound to succeed, especially with the power of electricity on its side. Germany, the story went, lost World War I not because of a lack of military prowess but because its culture and race were being infiltrated. Before World War I, young intellectuals waxed poetic about nationalism, because it felt right; and afterward they waxed poetic about pacifism, because it felt right - it was as if any intellectual idea about culture passed muster if it sounded good and struck a chord with the times.

    To a degree, that article strikes the same tone - throwing out assertions which merely sound good, but don't hold up to much logical scrutiny.
    Gerhard Adam
    Germany, the story went, lost World War I not because of a lack of military prowess but because its culture and race were being infiltrated. Before World War I, young intellectuals waxed poetic about nationalism, because it felt right; and afterward they waxed poetic about pacifism, because it felt right - it was as if any intellectual idea about culture passed muster if it sounded good and struck a chord with the times.
    I'm not sure what that means, because once again, it simply suggests that people were being sold a "bill of goods" by people who (by definition) didn't actually have any power to set policy. There's no question that ideas can gain or lose popularity based on the circumstances a population currently finds itself in (hence Germany's embrace of nationalism after Versailles). However, I don't see how this actually tells us anything new.

    How is this different from people being angry after 9/11 and reacting in a haphazard way that has us embroiled in two wars? Vietnam wasn't about pacifism as much as it was about the draft, which is why there aren't comparable anti-war activities today.

    On the other hand, some ideas represent a fundamental shift in social perspectives (e.g., women's suffrage, civil rights). The fact that some ideas work out and others don't isn't sufficient as an indictment of the idea, but perhaps of its application or circumstances.

    Overall, I'm still hung up on terms like "intellectual" and why this seems to carry so much weight in your view.  In truth, if you examine this country, that's not the word that comes to mind when I look at the foolishness that persists in public polls and government policy.
    Mundus vult decipi
    kerrjac
    How is this different from people being angry after 9/11 and reacting in a haphazard way that has us embroiled in two wars?
    Good point! Maybe I have to think about it some more.




    Thanks for a thoughtful post, Michael! Lehrer asked a provocative question: "Is there something wrong with the scientific method?" The scientific method is not in decline; rather, a few scientists allow desired outcomes to influence their interpretation of results. Irving Langmuir spent a lot of time thinking about this tendency, and in a 1953 talk he defined "pathological science." It is well worth reading:

    http://www.cs.princeton.edu/~ken/Langmuir/langmuir.htm

    Symptoms of Pathological Science:

    1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
    2. The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
    3. Claims of great accuracy.
    4. Fantastic theories contrary to experience.
    5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.
    6. The ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.

    adaptivecomplexity
    This talk seems vaguely familiar, and it looks like something I should revisit. Thanks for the excellent link.
    Mike
    Hm. I read the New Yorker article as saying:
    1) the Scientific Method is great.
    2) But humans have unconscious bias.
    3) Our industry of science, from how we incentivize scientists in universities and journals, perpetuates rather than corrects (2).
    4) Without correcting for (2) early on, errors in science result in bad conventional wisdom and practical applications that take years or decades to undo.

    This article's response seems to say (1) and (2) are great (duh - the scientific method is amazing), but doesn't acknowledge (3), which is really the only thing we can change. The rest are forces of nature.

    All these platitudes about how science works, how journalists don't get science, and the challenges of dealing with uncertainty are really not acknowledging or facing the difficult truth posed in this article: that *the incentives in science need some fixing*. Let's talk about these:
    - Journals are not doing a great job and have never truly embraced the age of the internet (which was, ironically, designed to share such information).
    - Universities are rewarding short term science (flashy findings) rather than long term science.
    - Data sharing among scientists is absolutely broken, because the reputation system is broken and everyone wants to protect their IP.
    - Too much chasing after new science is being performed, rather than verifying existing hypotheses.
    - etc

    All scientists chiming in here know these truths - so take the criticism in stride, and bring imagination to solve these issues.

    I see this as very similar to the financial crisis: a subprime asset with a high credit rating = some finding that hasn't been properly verified but still carries the reputation of 'scientific validity'. What is worrisome is that if everyone kicks the can down the road on replicability and validity, there may be some point at which a lot of the supposed 'truth' is unraveled in the public's eye, leading to a lack of confidence in science itself. Which would be really, really bad for everyone's future.

    socrates
    "Hm. I read the New Yorker article as saying:
    1) the Scientific Method is great."

    Gosh, Nick, I don't know how you can get that from the article. Just read the subtitle of the article: "Is there something wrong with the scientific method?" When a journalist puts a question in his headline, he is not just asking a question. He is making a statement. He is saying that the answer is in sufficient doubt as to warrant writing an article about it. That is not the same thing as saying "the scientific method is great". The subtext of the article is definitely "Maybe we can't trust science as much as we thought we could". The article's concluding sentence affirms that view.

    A more appropriate headline would have been "Is there something wrong with how science is reported by the media, interpreted by politicians and bureaucrats, and applied by industry?" The only ones fooled by bias are the ones supplying the bias - those who are reading more into the science than the science is telling us. The problem is not with science. Implying science can't be trusted (as opposed to "what you are being told about science can't be trusted") is the destructive meme that we really need to worry about and that, as you say, may "take years or decades to undo."
    Citizen Philosopher / Science Tutor
    Putting a question in a headline is actually not making a statement; it's being provocative to evoke a response (result: this blog post and all these comments). This is really standard fare that any journalist would agree is best practice. Accuracy of headlines isn't what journalism is about; it's about uncovering and sharing larger truths. This article was actually really good at that.

    The article itself never really questions the scientific method, it just questions the practices of the industry of science such as publication bias, the introduction of human bias in selective reporting, and the effects these have on everyone.

    I hate to be pedantic here, but once again you're not separating 'science' into a) the scientific method and b) the industry of science. Is tenure 'science'? Are journal articles 'science'? Not really; they're part of what has been built around the scientific method (just as 'education' is not 'learning'). And as such they can be improved, and when flaws are found in them, it's important to acknowledge and correct them.

    Arguing that it's the fault of individuals ("The only ones fooled by bias are the ones supplying the bias") rather than a systemic issue is not how changes get made. These are issues that need to be dealt with at a systemic level - building better incentive structures for researchers to prevent things like 'significance chasing', making sure journals apply more rigor and reduce bias toward trend-countertrend cycles, making sure null data is not a second class citizen, etc.

    I think we're really on the same page about stuff, that implying science can't be trusted IS a destructive meme - profoundly destructive - which is why I compared it to the effects of the recession and the unraveling of faith in our institutions. But the article does provide many examples of when the industry of science failed to correct for how we apply the scientific method. I think the best way to avoid the perpetuation of this meme is to argue less about how journalists don't 'get science' and instead address the issues the article IS raising, and then solve them.

    Fred Phillips
    I really hate the ideas that (i) our sun will eventually go nova and cook its planets, and (ii) in the farther future the universe will suffer thermodynamic death. I wish they weren't true. If I were an astronomer or cosmologist, I'd have a serious investigator-bias problem.

    So, Nick, I admire your tight and erudite reasoning - and agree with it - but I question your action recommendation. I cannot imagine an incentive that would engender investigator indifference about the extinction of all life. To offer or accept such an incentive would be pathological.

    OK, it's an extreme example, but extreme examples are useful for clarifying terms of a debate. In my example, the long-term implication is that we and all our successor species are dead. That can cause thoughtful people to seriously question the worth of their current actions or their contemplated actions. And people who are not thoughtful generally don't become scientists.

    Yes, there's a difference between non-indifference about an outcome, and objective rigor in investigating the outcome. It's tough for most people to maintain that difference. That's why we have research teams and peer review, so that others can cross-check our methods and results. The only incentive for peer-reviewing others' work is the expectation that others will donate their time to review my work. Of course there have been whole other threads on Science2.0 about the efficacy of peer review.