Are Pesticides Linked To Autism? Here Are 3 Big Concerns About A New Paper
    By Hank Campbell | June 23rd 2014
    If you haven't yet read that mothers who lived near farms have more kids with autism, you will. The reason, it is said, is that farms use pesticides. You're not off the hook, organic farmers: the results are from California, and there are lots and lots of organic pesticides in use in the study area.

    The first question should be: how could such a thing happen? I don't ingest pesticides, and neither should you. People don't ordinarily do that; you'd have to really try, unless you live out in the fields. In its coverage of this story, Reuters finds someone to speculate "that the pesticides probably drifted from crops through the air, and that’s how pregnant women were exposed."

    That someone would be Dr. Philip Landrigan, epidemiologist and pediatrician at Mount Sinai in New York, who is the go-to guy for this sort of thing. Everyone from the New York Times to the US Senate Committee on Environment and Public Works has quoted him, implicating pesticides in everything from ADHD to cancer to autism.

    But that is not really how pesticide application works. We can forgive a New York City pediatrician for never having visited a farm, or for not knowing that people also use pesticides in and around their homes, sometimes four or more times per year. Speculation aside, the public will be concerned about toxicological harm to kids because there is a template for concern: 50 years ago thalidomide, the anti-nausea drug given to pregnant women, clearly caused harm in babies. Every environmental fund-raiser compares their scary chemical of the week to it even today.

    But the authors of the paper in Environmental Health Perspectives don't even know if any of the mothers actually breathed in pesticides or were exposed to them in any way. All they know is that mothers of kids with an autism spectrum diagnosis in the agricultural Sacramento region were more likely to live near a farm that used pesticides. That's common in this region; I live here, and I am within 2 miles of a farm even though I am in a city of 50,000 people. Yet their results counter-intuitively found that living closer to pesticide-using farms was better: the odds ratios for people who lived closer (1.25 km) to farms were weaker than for people who lived farther away (1.5 km). Weak associations that make no sense are often a coincidence that everyone but the authors can see.
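
    Since odds ratios come up repeatedly below, here is a minimal sketch of how one is computed from a case-control table. The counts are invented for illustration; they are not the paper's data.

```python
# Odds ratio from a standard case-control 2x2 table.
# All counts below are hypothetical, NOT taken from the CHARGE paper.

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d): odds of exposure among cases over odds among controls.
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    return (a / b) / (c / d)

# Hypothetical: 60 of 200 case mothers lived near a treated field,
# versus 50 of 200 referent mothers.
print(round(odds_ratio(60, 140, 50, 150), 2))  # -> 1.29
```

    An OR of 1 means the odds are identical in both groups; the further from 1, the stronger the association.
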

    So given that first weakness, why did UC Davis let a graduate student imply the science was settled? In their press release, lead author Janie Shelton declares, "This study validates the results of earlier research that has reported associations between having a child with autism and prenatal exposure to agricultural chemicals in California." 

    That's a bold claim and so it requires bold evidence. Let's get to it!

    Agriculture in California is a $38 billion industry so almost everything is known about it, including pesticide use. Farming as logistics, technology and science is exhaustively examined on a constant basis because the margins are thin. The Childhood Autism Risks from Genes and Environment (CHARGE) study at U.C. Davis was created to look for causes of autism, so it has followed 1,600 kids with Autism Spectrum Disorder since 2003. Since the participants are in one California location, and pesticide data is also available, they matched younger kids in the study to a pesticide map.

    Result: more autism among kids close to farms. So much for healthy living outdoors, right? But there is a problem: they didn't actually measure anything; they used a very small sample size and a proxy. We all know what a proxy is; it is an agent for another thing. In epidemiology, though, its use can be tricky. That discipline often uses interviews or surveys, like this one did, which are not all that reliable in small samples. Using a proxy based on questionnaires to connect autism and pesticides via farms is a bit of a red flag unless things are well controlled. Using a logistic regression to accurately control for confounding factors can be tough even for scientists; in the hands of people with far less statistical expertise, it is wrong more often than right. Statistics experts by now assume the worst.
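
    To make the confounding point concrete, here is a small simulation, with entirely made-up data, of how a logistic regression that omits a confounder inflates an exposure's apparent odds ratio, while including it pulls the estimate back toward 1.

```python
# Simulated-data sketch of confounding in a logistic regression.
# Nothing here comes from the CHARGE study; the "confounder" is an invented variable.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# A confounder (think maternal age) drives BOTH the exposure and the outcome.
confounder = rng.normal(size=n)
exposure = (rng.normal(size=n) + confounder > 0).astype(float)
true_logit = -2.0 + 1.0 * confounder  # outcome depends ONLY on the confounder
outcome = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

def fit_logistic(X, y, steps=2000, lr=0.5):
    """Plain gradient-ascent logistic regression; returns coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    return beta

crude = fit_logistic(exposure.reshape(-1, 1), outcome)
adjusted = fit_logistic(np.column_stack([exposure, confounder]), outcome)
print("crude OR:   ", round(float(np.exp(crude[1])), 2))     # looks well above 1
print("adjusted OR:", round(float(np.exp(adjusted[1])), 2))  # falls back toward 1
```

    The exposure has no real effect here, yet the unadjusted model reports a strong association; whether the paper's adjustment was done well is exactly the question.
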
    Proxies clearly have value. If someone is dead or unavailable, a proxy must be used. But they can be misused. To put that into context, a proxy was also recently used to show:

    (1) Zebras evolved stripes due to flies.
    (2) Famous paintings are proof of climate change.
    (3) Bicep size correlates to political conservatism.
    (4) A lack of vitamins is linked to more autism.

    That fourth one is interesting because it also used the Childhood Autism Risks from Genes and Environment (CHARGE) study. Retrospective studies, finding people with autism and then looking for statistical matches to other stuff, are a lot of what epidemiologists do. Unfortunately, that's not good science.

    Using a demographic method like that, we can find a real nexus for autism, a place that is far worse than farms: It is being in wealthy Los Angeles neighborhoods with lots of pediatricians nearby. That is such an autism causer that kids who moved there and had no diagnosis before suddenly caught it.

    That's not to trivialize autism or pesticides; I mention it in order not to let science be trivialized. I grew up on a farm, and I respect pesticides in a way the anti-science fundamentalists at Pesticide Action Network can't. And public health is serious business. But because it is serious business, we want to make sure that the public outrage machine is only geared up when it is warranted. When thalidomide was found to be a problem, getting action was easy, but today those kinds of claims are launched once a week and the public quickly becomes jaded. That means they won't know real science from environmental bombast.

    Landrigan, quoted in the Reuters article, says the study's two biggest weaknesses, that they used a proxy and didn't actually measure anyone's exposure, lead him to counter-intuitively believe their findings are too conservative. So look for his reliable anti-pesticide stance to get him profiled in The New Yorker some time soon. But rational people can't take that seriously.


    I wrote in the title this has 3 weaknesses. In summation, they are:

    (1) They didn't measure any pesticides in mothers or their homes.

    (2) The data is inconsistent. Their odds ratios were weak, and that means there is a strong likelihood they are mistaking chance events for a real effect.

    (3) The sample size is too small: 468, in this case.
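
    To see how easily a sample of that size produces a weak odds ratio by accident, here is a quick simulation under the null hypothesis, with illustrative numbers: exposure is completely unrelated to diagnosis, yet ORs of 1.3 or more turn up in a meaningful fraction of trials.

```python
# Simulation: how often a "weak" OR appears by pure chance when the true OR is 1.
# The 468 total (234 cases, 234 controls) echoes the sample size above; the 30%
# exposure rate is an assumption for illustration.
import random

random.seed(42)

def simulated_or(n_cases=234, n_controls=234, p_exposed=0.3):
    """Draw a 2x2 table under the null (exposure unrelated to caseness); return its OR."""
    a = sum(random.random() < p_exposed for _ in range(n_cases))     # exposed cases
    c = sum(random.random() < p_exposed for _ in range(n_controls))  # exposed controls
    b, d = n_cases - a, n_controls - c
    if 0 in (a, b, c, d):
        return 1.0
    return (a / b) / (c / d)

trials = 2000
big = sum(simulated_or() >= 1.3 for _ in range(trials))
print(f"OR >= 1.3 by chance alone in {100 * big / trials:.0f}% of trials")
```
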

    The paper itself concedes it is only "exploratory" so why does the lead author declare "This study validates the results"? That brings us, by proxy(!), to a fourth weakness.

    The principal investigator of the study is Irva Hertz-Picciotto, a professor and vice chair of the Department of Public Health Sciences at UC Davis. She also happens to be on the Advisory Board of Autism Speaks and on the board of an anti-chemical advocacy group, Healthy Child, Healthy World. That doesn't invalidate the work, obviously. But, by proxy, when a person conducting a study that links two advocacy causes together fails to disclose her high-level involvement in advocacy groups for those causes, it has to be considered that something is happening that is not the impartial science the public expects of the science community.

    What would environmentalists say if a study claiming pesticides boosted IQ happened to be done by someone on the advisory board of a pro-pesticide advocacy group?

    The study would still have to be taken on its merits, just like this one must be, but it's a good reason to use some skepticism about its motivations. Taken on its merits, this tells us no more about autism than that study claiming vitamin supplements would prevent it.

    Citation: Janie F. Shelton, Estella M. Geraghty, Daniel J. Tancredi, Lora D. Delwiche, Rebecca J. Schmidt, Beate Ritz, Robin L. Hansen, Irva Hertz-Picciotto, 'Neurodevelopmental Disorders and Prenatal Residential Proximity to Agricultural Pesticides: The CHARGE Study', Environ Health Perspect; DOI:10.1289/ehp.1307044


    It seems like you are looking for problems in the study much harder than you are looking for potential risk factors for autism. In fact, the three arguments seem almost like you assuming the exact opposite of the study conclusions.

    The first issue was that pesticides were not observed in the bloodstream or in the house of any individuals. Considering how certain pesticides break down, this may be irrelevant to the study. After all, we already know the effects of direct organophosphate exposure. It has been studied once or twice, with predictable results. My understanding is that the link is positive, but not significant enough to justify banning an entire class of useful pesticides.

    The second issue you point out is that the data is not particularly consistent. That is true, but it is fairly typical of anything except massive studies. Whenever a study includes a sample size in the hundreds, results are going to be all over the place, simply because of the natural differences between individuals in any population.

    If you were to drop pesticides from a hundred separate airplanes, and then take samples on the ground an hour later, the pattern would not resemble a bullseye. If you take samples a year later, even less so. If you look for side-effects, rather than the pesticide itself, you will end up with a haphazard pattern, based on sensitivity. An individual with minimum exposure might show maximum symptoms, or vice-versa. Researchers have long known that the genetics of some people make them more resilient.

    The third issue you pointed to was the sample size. Although the small sample size casts doubts on the statistical significance of the results, it hardly invalidates them. It might indicate a need for more study, but it certainly doesn't reverse the conclusions.

    The more interesting result would be if we looked for other risk factors and adjusted for other known risk factors. For example, if we were to find that the couples living near the farms had delayed having kids until their late 30s, it would invalidate the results. Advanced maternal and paternal age significantly increases the risk of autism. On the other hand, if the researchers adjusted for such factors, then the study results seem more accurate.

    If the sample size was only ten, then the results could be easily dismissed. Once the sample size gets up to a few hundred, statistical measures can be applied to determine the chance of a random outcome. In this case, the chance of a non-representative sample appears to be 5%. I might be misreading it, but it seems like another argument against pesticide overuse.

    More to the point, I am not sure there are any good arguments for pesticide overuse. Pesticides should be properly controlled, properly applied, carefully regulated, and kept away from pregnant women whenever possible. If you don't believe the study, then you should at least believe the pesticide label, which I believe carries the same warning.

    "It seems like you are looking for problems in the study much harder than you are looking for potential risk factors for autism. In fact, the three arguments seem almost like you assuming the exact opposite of the study conclusions."
    That's what science is, the burden of proof has to fall on the evidence. The opposite of that is advocacy. 

    The sample size is a problem because their logistic regression led to an OR that can't be right. That means they are finding significance in something that is probably chance. Their press release is the problem because the majority of mainstream journalists, and all of the employees at anti-science groups, will take that misrepresentation and run with it. The authors did that on purpose also.
    You are allowed to defend the assumption of "no relationship", but the evidence seems to be pointing the opposite direction. Similar studies of other risk factors that show weak but persistent ORs have been borne out by subsequent research. (The OR for the relationship between advanced parental age and autism was only 1.4, with a 95% confidence interval of 1.1 to 1.8.)

    The "bottom" of the CI for the 3rd trimester for organophosphates was pretty similar to the "advanced parental age" low estimate. The pesticide study is smaller, so the "top" is different, but that is how statistics work.
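
    The "top" and "bottom" behavior described here falls out of the standard log-scale construction of an odds-ratio confidence interval: shrinking the sample stretches the interval asymmetrically, mostly at the top. A sketch with hypothetical counts:

```python
# OR confidence intervals at two sample sizes, same underlying OR.
# Counts are hypothetical; this is the generic log-OR Wald construction,
# not necessarily the exact method the paper used.
import math

def or_with_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table via the log-OR normal approximation.
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

print(or_with_ci(140, 100, 100, 140))  # larger study: tighter interval
print(or_with_ci(14, 10, 10, 14))      # one tenth the size: same OR, far wider top
```
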

    If you were to re-evaluate the statistics, you might find a flaw in their methodology, but that is not the argument you made. The argument you made was that any sample size this small will turn up random numbers that can be force-fit into a regression analysis to come up with an OR of 2. I don't think that is true.

    The study appears to demonstrate that the 3rd-trimester exposure risk for organophosphates was greater than the first-trimester exposure risk, and greater than random chance. The difference between 1.75 km and 1.25 km appears to be random, but the difference between first trimester and third trimester does not. This fits well with what we know of pesticide exposure during pregnancy.

    More to the point, the CI for organophosphates seems to be sufficient to reject the null hypothesis. This fits well with similar research on organophosphate metabolites.

    You are suggesting that it is a form of misrepresentation to release the results of the study to the press. I disagree. I feel that the press bears some responsibility to report that the evidence against other pesticides was thin, but that does not invalidate the evidence itself.

    Statistical relationships can be weak for a variety of reasons. "Weak" is not the same as "false". More study is needed, obviously, but if I were expecting a child right now, I would think that a prudent course of action would be to avoid organophosphates for a few weeks. If you are suggesting that the precaution is unjustified, I think the research is sufficient to shift the burden of proof.

    "You are suggesting that it is a form of misrepresentation to release the results of the study to the press. I disagree. I feel that the press bears some responsibility to report that the evidence against other pesticides was thin, but that does not invalidate the evidence itself."
    No, it is a form of misrepresentation to misrepresent the results in a press release. They are not the first, of course; there are over a thousand articles here where we criticize those kinds of tactics even while mainstream media bought into them. As I said, the paper and an outside scholar in Reuters acknowledge its limitations. The authors themselves are doing the advocacy - and perhaps the journal.

    On your last paragraph, I agree, but it is a good thing for audience members unfamiliar with statistics to read. I'm not advocating ingesting pesticides; I am a guy who doesn't even like to take aspirin. Someone who makes his own butter in a mason jar because he doesn't want commercial butter can also appreciate the scientific value of pesticides - and want to defend science from an agenda.

    Clearly you do the same thing, though your read of their work is more generous. You believe they have shown that pesticides cause autism but no EPA reregistration is ever going to use this paper because the flaws are too numerous.
    I'm not sure your statistical arguments are convincing. The log-odds 95% CI is 1.1 to 3.2, so only at the edge of the CI does the odds ratio become 1. Moreover, the 468 sample size is already factored into the CI - i.e. the study is big enough to give us the 95% interval that goes from 1.1 to 3.2. The commenter above misses this point as well. Sample size sufficiency depends on odds ratio strength.
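
    The point that the sample size is already baked into the interval can be shown directly: assuming a symmetric Wald-type interval on the log scale (the paper's exact method may differ), the reported 1.1 to 3.2 CI by itself implies a point estimate, a standard error, and how far OR = 1 sits from the estimate.

```python
# Backing out the point estimate and standard error from a reported OR CI,
# assuming a symmetric normal-approximation interval on the log scale.
import math

lo, hi = 1.1, 3.2  # the reported 95% CI for the odds ratio
z = 1.96           # two-sided 95% normal quantile

log_or = (math.log(lo) + math.log(hi)) / 2    # point estimate (log-scale midpoint)
se = (math.log(hi) - math.log(lo)) / (2 * z)  # half-width divided by z
z_against_null = log_or / se                  # how many SEs OR = 1 is from the estimate

print(round(math.exp(log_or), 2), round(se, 3), round(z_against_null, 2))  # -> 1.88 0.272 2.31
```

    Under that assumption the implied point estimate is about 1.9 with the null a bit over two standard errors away, which is why the interval only barely excludes 1.
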

    In addition, when you point out that the drop in incidence at 1.25 km compared to 1.5 km is counterintuitive, I would argue that you shouldn't be looking at the finer binning precisely because that's where the data quality starts to suffer, when comparing small bins.

    However, I do agree with the proxy measurement problem as being one step removed from truth. How can we overcome this? It's probably true that pesticides break down in the liver, so assaying them directly in blood is not right. Perhaps we can look for unique metabolic products in the bloodstream. That would still be a proxy, but a much closer one to the putative causative agent.

    I'll place a capture of the table we're talking about here:

    Autism, developmental delay and referents differed on baseline characteristics, so they did a logistic regression to control for confounding factors. Did they do it correctly? You may be better able to answer that. What they can't do is what they did anyway: declare in a press release that this was a solid result affirming their beliefs when they didn't actually measure anything.
    "Speculation aside, the public will be concerned about toxicological harm to kids because there is a template for concern - 50 years ago thalidomide, the nausea drug for pregnant women, clearly caused harm in babies. Every environmental fund-raiser compares their scary chemical of the week to it even today." Thalidomide is still used to treat several types of cancer, rheumatoid arthritis, and a number of other diseases. As long as you're not pregnant, it's still a useful drug.
    "This page intentionally left blank." --Gödel
    Our author writes almost as if the odds ratios are significantly lower for 1.25 km compared to 1.5 km, but they aren't, right? Similarly, the UC Davis press release points out where the trend is in the more expected direction (assuming the effect is real), but that trend isn't significant either. A yellow card for both of you, for interpreting small non-significant differences as interesting. If the "true" ORs were the same, or slightly higher for 1.25 km, it'd be easy to get point estimates that are slightly lower for 1.25 km by chance.
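
    That yellow card can be illustrated with a simulation: give two distance bins the identical "true" log odds ratio and a plausible (assumed) amount of sampling noise, and the closer bin's point estimate comes out lower about half the time anyway.

```python
# Two bins with the SAME true OR still produce differing point estimates.
# The true OR of 1.3 and the SE of 0.25 are assumptions for illustration only.
import math
import random

random.seed(7)

TRUE_LOG_OR = math.log(1.3)  # identical "true" effect in both distance bins
SE = 0.25                    # assumed sampling noise for a study of a few hundred

def estimate():
    """One bin's estimated log-OR: truth plus normal sampling error."""
    return random.gauss(TRUE_LOG_OR, SE)

trials = 4000
flips = sum(estimate() < estimate() for _ in range(trials))
print(f"closer bin came out lower in {100 * flips / trials:.0f}% of trials")
```
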
    Also, the time to complain about sample sizes is when a null hypothesis fails to be rejected, but not by much, and you think a bit more data would have rejected the null. Don't get me wrong - more data is still good and would have been welcome.
    PS: I'm not trying to advocate for or against the hypothesis. Might be some risk, might not.

    Oh, there is risk. It is silly to think ingesting pesticides in quantity is not harmful. The problem is this paper does not show anything about risk and the authors exaggerate what they did find.

    We can prove climate change just by using guys named Steve, for example, but that was done for humor. Epidemiologists forget this is only "exploratory" when they are writing their press releases.

    As I said in the article, we should be suspicious if someone at a pro-pesticide advocacy group creates a study specifically designed to correlate cause and benefit - good studies are prospective, not retrospective - and we have to be suspicious of claims by avowed anti-pesticide people the same way. Focusing on tiny percentages related to the proxy and ignoring the fact that nothing was actually measured is a distraction. There's no way this study could be used by the EPA or anyone else so trying to talk about the null hypothesis isn't meaningful. I can create a 5 sigma accurate answer that is completely wrong without even trying.
    Not good enough for EPA to issue rules, that's for sure.
    However, if we had found risks much higher as the distance from the fields got greater, I might try looking at factors other than pesticides. So I'm not ready to declare it worthless, just very cheap. John Snow (around 1850) helped kill the miasma theory of diseases by putting little dots on maps of London where people had suffered cholera, and seeing they clustered around certain water sources - he didn't at first ask which water source each person actually used (I think, hope I'm not rewriting history). The idiot elders thought it enough evidence to remove some pump handles - those fools.
    Lots of epi studies are cheap, because who wants to spend serious money on prospective studies and measuring particular factors until there's a bit of evidence that the factor may be a problem - we don't want to pay to measure every possible thing.

    Well, on the bright side, maybe this will persuade the silly folk to vaccinate their children, so we can get back to the task of eradicating perfectly curable dangerous diseases. On the downside, this will push more of the silly people to seek to ban pesticides and possibly cause a drop in food production... But what could go wrong with that? We'll all be "healthier."