Lesson 1. Doing something is better than doing nothing.

"You should go to the studio everyday," a University of Michigan art professor named Richard Sears told his students. "There's no guarantee that you'll make something good -- but if you don't go, you're guaranteed to make nothing." The same is true of science. Every research plan has flaws, often big, obvious ones -- but if you don't do anything, you won't learn anything.

I have been asked to write six columns for the journal Nutrition about common scientific mistakes. The mistakes I see are mostly mistakes of omission.

A few years ago I visited a pediatrician in Stockholm. She was interested in the connection between sunlight and illness (children are much healthier in the summer) and had been considering doing a simple correlational study. When she told her colleagues about it, they said: Your study doesn't control for X. You should do a more difficult study. It was awful advice. In the end, she did nothing.

Science is all about learning from experience. It is a kind of fancy trial and error. But this modest description is not enough for some scientists, who create rules about proper behavior. Rule 1. You must do X (e.g., double-blind placebo-controlled experiments). Rule 2. You must not do Y (e.g., "uncontrolled" experiments).

Such ritualistic thinking is common in scientific discussions, hurting not only the discussants -- it makes them dismissive -- but also those they might help. Sure, some experimental designs are better than others. It's the overstatement, the notion that experiments in a certain group are not worth doing, that is the problem. It is likely that the forbidden experiments, whatever their flaws, are better than nothing. One group that has suffered from this way of thinking is persons with bipolar disorder. Over the last thirty years, few new treatments for this problem have been developed. According to two researchers in the area, "many of us in the academic community have inadvertently participated in the limitation of a generation of research on bipolar illness . . . by demands for methodological purity or study comprehensiveness that can rarely be achieved" [1, p. 71].

A variation of this problem is to ignore evidence that does not meet some level of rigor. An example is a recent panel report [2] on the value of multivitamin/mineral supplements. "We strongly criticize the panel's decision," wrote Ames, McCann, Stampfer, and Willett [3], "to base policy recommendations only on evidence from RCTs [randomized clinical trials] . . . The panel proudly points to the fact that, even though folate was well known to decrease the risk of neural tube defects in animal studies, policy recommendations for folate supplementation to prevent neural tube defects were delayed while authorities waited some years for confirmation from RCTs. One can only wonder how many infants were born with neural tube defects while authorities waited."

Rituals divide behavior into right and wrong. Science is -- or should be -- more practical. The statistician John Tukey wrote about ritualistic thinking among psychologists in an article called "Analyzing data: Sanctification or detective work?" [4]. One of his examples involved measurement typology. The philosopher of science N. R. Campbell had come up with the notion, popularized by Stevens [5], that scales of measurement could be divided into four types: ratio, interval, ordinal, and nominal. Weight and age are ratio scales, for example; rating how hungry you are is an ordinal measure. The problem, said Tukey, was the accompanying prohibitions. Campbell said you can add two measurements (e.g., two heights) only if the scale is ratio or interval; if you are dealing with ordinal or nominal measures, you cannot. The effect of such prohibitions, said Tukey, is to make it less likely that you will learn something you could have learned. (For more about the trouble with this typology, see [6].)
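To make the typology concrete, here is a toy sketch in Python. The data and variable names are hypothetical, invented for illustration; they do not come from any study mentioned above.

```python
# Stevens' four scale types, with hypothetical example data.

weight_kg   = [61.0, 72.5, 68.3]   # ratio: true zero; "twice as heavy" is meaningful
temp_c      = [36.5, 37.1, 38.0]   # interval: differences meaningful, zero arbitrary
hunger_1to5 = [1, 3, 2, 4]         # ordinal: order only (1 = "not hungry" ... 5 = "starving")
blood_type  = ["A", "O", "AB"]     # nominal: categories, no order at all

# Sanctioned under the typology: arithmetic on ratio and interval scales.
print("mean weight:", sum(weight_kg) / len(weight_kg))

# "Forbidden" under the typology: arithmetic on an ordinal scale. Tukey's
# point is that computing it anyway may still reveal a real pattern (e.g.,
# hunger rising across conditions), and refusing to look guarantees that
# you learn nothing.
print("mean hunger rating:", sum(hunger_1to5) / len(hunger_1to5))
```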

I fell victim to right-and-wrong thinking as a graduate student. I had started to use a new way to study timing and had collected data from ten rats. I plotted the data from each rat separately and looked at the ten graphs. I did not plot the average of the rats because I had read an article about how, with data like mine, averages can be misleading -- they can show something not present in any of the data being averaged. For example, if you average bimodal distributions you may get a unimodal distribution, and vice versa. After several months, however, I averaged my data anyway; I can't remember why. Looking at the average, I immediately noticed a feature of the data (symmetry) that I hadn't noticed when looking at each rat separately. The symmetry was important [7].
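A small simulation makes the averaging artifact concrete. The numbers below are hypothetical, not my original rat data: every simulated distribution is clearly bimodal, yet the group average is unimodal, because the locations of the two modes differ from rat to rat.

```python
import numpy as np
from scipy.signal import find_peaks

def gauss(x, mu, sd):
    """Normal density, used to build smooth response-time distributions."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

x = np.linspace(0, 60, 601)  # time axis, arbitrary units

# Ten simulated "rats": each density has two narrow modes 12 units apart,
# but the location of the pair shifts from rat to rat.
rats = [0.5 * gauss(x, m, 2) + 0.5 * gauss(x, m + 12, 2)
        for m in np.linspace(14, 26, 10)]

avg = np.mean(rats, axis=0)  # the group-average density

def n_modes(y):
    # Prominence threshold ignores numerical ripple on near-flat regions.
    peaks, _ = find_peaks(y, prominence=0.001)
    return len(peaks)

print("modes per rat:   ", [n_modes(r) for r in rats])  # 2 for every rat
print("modes in average:", n_modes(avg))                # 1: bimodality averaged away
```

The reverse artifact is just as easy to produce: average two unimodal distributions with well-separated peaks and the result is bimodal, a shape present in neither individual.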

A corollary is this: If someone (else) did something, they probably learned something. And you can probably learn something from what they did. For a few years, I attended a meeting called Animal Behavior Lunch where we discussed new animal behavior articles. All of the meetings consisted of graduate students talking at great length about the flaws of that week's paper. The professors in attendance knew better, but somehow we did not manage to teach this. The students seemed to have a very strong bias toward criticism. Perhaps they had been told that "critical thinking" is good. They may never have been told that appreciation should come first. I suspect that this failure to teach graduate students to see clearly the virtues of flawed research is the beginning of the problem I discuss here: mature researchers who don't do this or that because they have been told not to do it (it has obvious flaws) and as a result do nothing. Leonardo da Vinci put it nicely: "If they are unable to see what is divine in Nature, which is all around them, how will they be able to see their own divinity, which is sometimes hidden?"


References

1. Post RM, Luckenbaugh DA. Unique design issues in clinical trials of patients with bipolar affective disorder. J Psychiatr Res 2003;37:61-73.

2. Multivitamin/mineral supplements and chronic disease prevention. Am J Clin Nutr 2007;85(suppl):254S-327S.

3. Ames BN, McCann JC, Stampfer MJ, Willett WC. Evidence-based decision making on micronutrients and chronic disease: long-term randomized controlled trials are not enough. Am J Clin Nutr 2007;86:522-3.

4. Tukey JW. Analyzing data: Sanctification or detective work? Am Psychol 1969;24:83-91.

5. Stevens SS. On the theory of scales of measurement. Science 1946;103:677-680.

6. Velleman PF, Wilkinson L. Nominal, ordinal, interval, and ratio typologies are misleading. Am Stat 1993;47:65-72.

7. Roberts S. Isolation of an internal clock. J Exp Psychol: Anim Behav Proc 1981;7:242-268.