Could exposure to glyphosate, an herbicide often paired with genetically engineered corn, soybeans, cotton, and other crops, be causing cancer? That question has become the central contention advanced by critics of agricultural biotechnology.
Multiple studies and assessments from regulatory agencies around the world have concluded that glyphosate poses no significant health risks, either to the public from trace residues in our food or to workers exposed through farm work or manufacturing.
However, a new analysis, what's called a "meta-analysis" because it crunches data from multiple studies, is raising a cautionary flag about the consensus that glyphosate is safe.
What are we to make of it? Far from being a neutral examination, the paper combines the results of several studies that are in no way comparable, selecting the data that will produce the desired result, and then highlights that result in order to attract attention.
A 41% increased cancer risk?
The analysis, which appeared in the journal Mutation Research, purports to show that human exposure to glyphosate increases the risk of non-Hodgkin’s lymphoma, or NHL. The paper analyzes no new data but instead combines results from a number of different studies. The first author is Luoping Zhang of the University of California-Berkeley so I will refer to the study as the Zhang paper.
The final print version of the paper is not yet published. Nevertheless, the available preprint has already attracted media attention and stirred much Internet debate. An article citing the study appeared in The Guardian on February 14 under the headline: “Exposure to weed killing products increases risk of cancer by 41% – Evidence ‘supports link’ between exposures to glyphosate herbicides and increased risk for non-Hodgkin lymphoma.”(1)
Before examining the actual Mutation Research paper, some background is essential.
The question whether the world's most widely used weed killer is causing cancer has been the focus of intense controversy since March 2015, when the International Agency for Research on Cancer (IARC) issued a report classifying glyphosate as a "probable carcinogen." It needs to be understood that IARC doesn't evaluate risk in the way other health agencies do, taking into account actual exposure to a substance or agent in the real world, in terms of the intensity and duration of the exposure. Rather, IARC chooses to evaluate "hazard," that is, whether a substance or agent could possibly cause cancer under some conditions, no matter how far removed from everyday experience. Under this much more relaxed standard, of the more than 500 agents that IARC has classified with respect to carcinogenicity, only one has been judged by the Agency to be "probably not carcinogenic."
IARC’s conclusion regarding glyphosate conflicts with the assessment of every other health or regulatory agency that has reviewed the safety of the chemical. That includes the Environmental Protection Agency, European Food Safety Authority, Food and Agriculture Organization in a joint study with the World Health Organization, European Chemicals Agency, Health Canada, German Federal Institute for Risk Assessment, and others.
These agencies have all concluded that, at the levels to which farmers and the general population are exposed, glyphosate does not pose a cancer risk. Following the release of the IARC report in 2015, a major epidemiological study was published based on data collected by the Agricultural Health Study (AHS). The AHS gathered data on 54,000 pesticide applicators, including 45,000 who had handled glyphosate, beginning in the mid-nineties. The authors of the paper (Andreotti et al., "Glyphosate Use and Cancer Incidence in the Agricultural Health Study," 2018) concluded:
In this large, prospective cohort study, no association was apparent between glyphosate and any solid tumors or lymphoid malignancies overall, including NHL and its subtypes. There was some evidence of increased risk of AML among the highest exposed group that requires confirmation.
It should be noted that IARC’s conclusion was based on animal evidence (studies conducted in rats and mice), rather than on human, epidemiologic evidence, which the IARC considered to be “limited.” However, IARC has been criticized for selecting the few “positive” results from rodent studies that seemed to show an increased tumor yield in exposed animals, while ignoring exculpatory results that showed decreasing tumor yield in exposed animals.(2)
Despite the many questions surrounding the IARC glyphosate report, its conclusion has caused widespread concern in the public and has been taken up by environmental activists. In addition, there are currently roughly 9,300 pending lawsuits in U.S. courts brought by plaintiffs who claim that their cancer was caused by exposure to Roundup, which contains glyphosate as its active ingredient.
In the first case to go to trial, a California school groundskeeper, Dwayne Johnson, sued Monsanto (the manufacturer of Roundup), claiming that his terminal NHL was caused by his exposure to Roundup in the course of his work. Last August, he was awarded $39 million in compensatory damages and $250 million in punitive damages. The punitive damages were later reduced to $39 million. The jurors said in interviews that they were heavily influenced by the IARC hazard designation.
We can now turn to the Mutation Research paper.
The background summarized above is relevant because Zhang et al. refer to "considerable controversy" surrounding the question of the carcinogenicity of glyphosate. But they describe a situation in which opinion is evenly divided, rather than acknowledging the overwhelming consensus among health agencies regarding the safety of glyphosate or making any mention of the questions pertaining to IARC's assessment.
Drilling down into the meta-analysis
At the outset, we should note that this is a very long paper – three or four times the length of the average paper in epidemiology. The unusual length is due, in large part, to the fact that the authors include many secondary analyses by which they attempt to bolster the case they are making. However, the secondary analyses serve to obscure much more important issues, which the authors avoid addressing.
The bulk of the paper is devoted to a meta-analysis of the small number of epidemiologic studies that examined the association of glyphosate exposure and risk of developing NHL.
In the main analysis presented in the paper, Zhang et al. combine the results from the large Agricultural Health Study cohort with the results of five case-control studies. The result was a summary relative risk of 1.41 (95% confidence interval 1.13-1.75). This means that, compared to those who were not exposed to glyphosate, those exposed to the compound had a 41 percent higher likelihood of developing NHL. For reference, this is a very modest increase in risk, and NHL is a rare disease. In the U.S., roughly 20 new cases are diagnosed per 100,000 men and women each year. If the 41 percent figure were real, that would mean that 8 additional new NHL cases would be expected each year for every 100,000 people exposed to glyphosate. But the 41 percent figure, as we will see below, is almost certainly too high, based on the best human evidence. Yet the authors highlighted this highly questionable number in the abstract, knowing that it would be picked up by journalists and activists and instill fear in the public.
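The excess-incidence arithmetic can be checked in a few lines (a simplification: it treats the relative risk as applying uniformly to the baseline rate):

```python
# Back-of-the-envelope check of the excess-incidence figure,
# using the stated U.S. baseline of ~20 new NHL cases per 100,000 per year.
baseline_per_100k = 20.0
relative_risk = 1.41  # the summary RR reported by Zhang et al.

# Excess cases = baseline rate times the proportional increase in risk.
excess_per_100k = baseline_per_100k * (relative_risk - 1)
print(round(excess_per_100k))  # -> 8 additional cases per 100,000 exposed per year
```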
Meta-analysis is a statistical technique used to combine a number of relatively small studies in order to obtain a more stable, and therefore more credible estimate of an association. A meta-analysis produces a summary relative risk (RR), which is a weighted average of the RRs from the individual studies.
The cardinal requirement for conducting a valid meta-analysis is that the individual studies are similar enough in their methods, study design, and data quality to justify combining them to obtain an overall summary measure of risk. The results of a meta-analysis are only as good as the individual studies that go into it.
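A minimal sketch of the standard inverse-variance (fixed-effect) version of this calculation may make the mechanics concrete. The three input studies below are invented for illustration and are not the actual glyphosate data:

```python
import math

def pooled_rr(studies):
    """Fixed-effect (inverse-variance) summary of relative risks.

    Each study is (RR, ci_low, ci_high) with a 95% CI. The standard
    error of the log-RR is recovered from the CI width:
    SE = (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2  # larger studies (narrower CIs) get more weight
        num += w * math.log(rr)
        den += w
    return math.exp(num / den)

# Illustrative inputs: one large null-ish study and two small elevated ones.
print(round(pooled_rr([(1.1, 0.9, 1.35), (2.0, 1.1, 3.6), (1.9, 1.0, 3.5)]), 2))  # -> 1.22
```

Note how the large study, with its narrow confidence interval, dominates the weighted average, which is precisely why the choice of which estimate to use from the largest study matters so much below.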
In their primary meta-analysis, Zhang et al. combine 6 studies. Five of these are case-control studies; one is a cohort study.
In a case-control study, the researcher identifies cases of the disease of interest (through hospitals, cancer registries, etc.) and selects a comparison group that is generally similar to the case group but is free of the disease under study. Cases and controls are then interviewed about their personal habits and past exposures. This method has the strength of enabling one to enroll large numbers of cases, even when a disease is rare, as in the case of NHL.
However, a major weakness of the case-control study design is that one is obtaining information about exposures of interest from cases, after they have already developed the disease. Cases may respond to questions about their exposures differently from controls. Specifically, cases may be more apt to ruminate about what caused their illness, and this may lead them to emphasize their exposures, whereas the controls do not have the same motivation. This is referred to as “recall bias” and can lead to a spurious association.
An additional problem, which is pertinent to the issue at hand, is that population-based case-control studies are not suitable for studying environmental or occupational exposures, due to the small percentage of people exposed to any particular agent.
Cohort studies start by enrolling a study population (a cohort) that can be assessed at the outset in terms of their health and exposure history and then followed for a number of years in order to identify new cases of disease that develop during follow-up. Cohort studies usually take more time and are more expensive to conduct than case-control studies. Furthermore, the cohort needs to be large enough, and followed for an adequate duration, to yield enough cases of a rare disease to evaluate the association of interest. The principal advantage of a cohort study over a case-control study is that in the former, the researcher obtains information about the exposures of interest prior to the development of disease. Thus, recall bias is not an issue in cohort studies. An additional advantage is that cohorts can be made up of people working in a particular occupation, which increases the exposure prevalence for occupational exposures of potential interest (e.g., pesticides in a cohort of farmers).
Zhang et al. perform many subsidiary analyses to determine whether the estimate of a 41 percent increase in risk for those exposed to glyphosate stands up under different assumptions. But much of their lengthy discussion is beside the point and serves only to distract the reader from what is the key question regarding their analysis: Are the different studies sufficiently comparable in the quality of their data and the calculation of risk to justify combining them?
A look at Table 4 of the Zhang paper, which reports the results of the individual studies combined in the meta-analysis, helps answer this question. Detailed exposure information was available in the AHS, enabling the researchers to classify the study population of ~54,000 pesticide applicators into quartiles of exposure. The risk estimate selected by Zhang et al. (from among many results in the 2018 paper by Andreotti et al.) for farmers in the quartile with the highest exposure in the AHS, compared to farmers unexposed to glyphosate, is 1.12 (95% CI 0.83-1.51), indicating no increased risk for those with the highest cumulative exposure. (We will return to the crucial choice of this estimate below.) Owing to the large size of the AHS, the confidence limits are fairly narrow. In fact, 440 of the 575 NHL cases in the AHS study were exposed to glyphosate.
If we look at the case-control studies, the risk estimates for 4 of the 5 studies were elevated, ranging from 1.85 to 2.36, while the remaining study showed no elevation in risk. The confidence intervals are much broader, reflecting both the smaller size of the case-control studies and the smaller number of cases exposed to glyphosate. The number of exposed cases across all of the case-control studies combined is only 136 out of a total of 2,836 NHL cases. As mentioned earlier, a key point that is not well understood, even by some epidemiologists, is that population-based case-control studies of occupational exposures have relatively small, often very small, numbers of cases and controls who are exposed to the agent of interest. This not only means lower statistical power to detect an effect; it also tends to produce estimates that are highly unstable (that is, small changes in how one categorizes exposure can result in large differences in the estimates).
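The dependence of precision on the exposed counts can be illustrated with Woolf's standard formula for the standard error of a log odds ratio. The two 2x2 tables below are hypothetical, chosen only to show how small exposed cells widen the confidence interval even when the odds ratio itself is identical:

```python
import math

def or_ci(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Odds ratio with a 95% CI (Woolf's method).

    SE(ln OR) = sqrt(1/a + 1/b + 1/c + 1/d), so any small cell --
    typically the exposed counts in occupational case-control
    studies -- dominates the standard error.
    """
    a, b, c, d = exp_cases, unexp_cases, exp_controls, unexp_controls
    orr = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(orr) - 1.96 * se)
    hi = math.exp(math.log(orr) + 1.96 * se)
    return orr, lo, hi

# Same odds ratio, but ten times the counts in the second table:
print(or_ci(15, 485, 10, 490))     # few exposed subjects -> wide CI, crosses 1.0
print(or_ci(150, 4850, 100, 4900)) # more exposed subjects -> much narrower CI
```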
A related point that is clear from Table 4 is that "exposure" does not mean the same thing in the different studies. Whereas in the AHS the highest quartile of exposure is contrasted with "no glyphosate exposure" to estimate risk, in three of the case-control studies the exposure contrast is simply "ever" vs. "never exposed to glyphosate." In the two remaining case-control studies, the definition of exposure is "greater than 10 days/year" (vs. "no exposure to any pesticide") and "greater than 2 days/year" (vs. "no exposure to glyphosate"). Thus, the exposure classification in the case-control studies is much cruder than in the AHS, and one would not expect such crude, dichotomous comparisons to show a higher risk than the sharper contrast used in the AHS between the highest and lowest exposure groups.
The AHS examined glyphosate exposure in relation to the risk of 20 different cancers, including different types of lymphohematopoietic cancers, including NHL. In their analysis, the researchers adjusted for exposure to other pesticides, as well as for important confounding factors, such as smoking and body weight. In contrast, the case-control studies focus on a single type of cancer, NHL, which actually has different subtypes. And they were unable to adjust for many confounding factors.
Owing to the size of the AHS and the large number of cohort members who were exposed to glyphosate, and the long follow-up of the cohort, this study provides much finer-grained information about the health effects of exposure than the case-control studies with the weaknesses described above.
Given the differences in data quality and methods between the case-control studies and the cohort study, it is highly questionable to combine them. The authors devote a lot of space to discussing potential weaknesses of the AHS to explain why it might have failed to detect a positive association with glyphosate exposure. Much of this discussion is beside the point. They devote much less space to describing the real deficiencies of the case-control studies.
One further point needs to be made. When conducting a meta-analysis, one is often faced with the choice of which risk estimate to use from a given study, which may present a number of different risk estimates. The updated analysis of the AHS by Andreotti et al. (2018) presented a large number of risk estimates resulting from different analyses of lymphohematopoietic malignancies, including NHL. These include results for 5-year, 10-year, 15-year, and 20-year lag periods. Zhang et al. chose to use the 20-year lag result for inclusion in the meta-analysis (RR = 1.12, 95% CI 0.83-1.51). In fact, the unlagged, 5-year, 10-year, and 15-year lagged RRs for the highest quartile are all below 1.00 (0.87, 0.87, 0.83, and 0.94, respectively). There is no particular justification for picking the 20-year lagged result, as Zhang et al. do. They could just as reasonably have picked the 10-year lag analysis, which gave RR = 0.83 (95% CI 0.62-1.10). But it is interesting to note that the 20-year lagged RR was the largest of the five risk estimates presented in the paper and the only one above 1.00. If Zhang et al. had picked the 10-year lagged RR for inclusion in the meta-analysis, the overall result would likely not have been statistically significant, since, even with the selection of the largest RR, the lower confidence limit of the summary RR is barely above the threshold for statistical significance (lower bound = 1.13). (The data from the AHS account for more than 50% of the total data in the meta-analysis, so using an RR below 1.0 would exert a strong downward pull on the summary RR.)
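The mechanics of that downward pull are easy to demonstrate. In the sketch below, the two AHS inputs are the estimates quoted above, but the five case-control entries are placeholders with invented confidence intervals, used only to show how substituting the 10-year lagged AHS estimate for the 20-year one lowers the pooled result:

```python
import math

def pooled(studies):
    # Inverse-variance (fixed-effect) pooling of relative risks,
    # with each study's SE recovered from its 95% CI.
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        num += math.log(rr) / se**2
        den += 1.0 / se**2
    return math.exp(num / den)

# The two AHS estimates quoted in the text; the case-control entries
# are hypothetical, with CIs invented purely for illustration.
ahs_20yr = (1.12, 0.83, 1.51)
ahs_10yr = (0.83, 0.62, 1.10)
case_controls = [(2.1, 1.1, 4.0), (1.9, 1.0, 3.6), (2.3, 1.1, 4.8),
                 (1.85, 0.9, 3.8), (1.0, 0.5, 2.0)]

# The narrow AHS CI gives it by far the largest weight, so swapping
# its RR moves the whole summary.
print(round(pooled([ahs_20yr] + case_controls), 2))  # pulled up by the 20-year lag RR
print(round(pooled([ahs_10yr] + case_controls), 2))  # lower with the 10-year lag RR
```

Because the AHS carries a comparable weight under either choice, substituting any RR below 1.0 must drag the summary down, whatever the case-control inputs.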
Let’s recapitulate the key points:
1) Zhang et al. set out to combine the results of studies of drastically different quality. Yet they never question the appropriateness of conducting a meta-analysis, which, in this case, is the weighted average of one high-quality cohort study with five case-control studies of much poorer quality.
2) Confronted with the choice of which risk estimate to select from the AHS, the researchers chose the highest RR of 5 reported in Andreotti et al. (2018), thus, ensuring that the resulting summary RR would reach statistical significance.
3) In order to give their paper the appearance of academic rigor, the authors conducted a huge number of secondary analyses, varying different conditions, to convince us that the 41 percent increase in risk is a solid result unaffected by the details of their analysis. But these "sensitivity analyses" and subtle statistical considerations are presented in place of the more basic issues that actually determined the results of the meta-analysis. If the authors were truly interested in the validity of their meta-analysis, they would have acknowledged the weaknesses of the case-control studies. Furthermore, they would have presented an analysis showing the effect of using each of the 5 different risk estimates reported in the AHS study, not just the highest one. Such an analysis would likely have shown that using most of the AHS RRs in the meta-analysis yields a result that is not statistically significant. Of course, this would have been much less newsworthy and would have made their paper much less likely to be published.
4) The authors highlighted the questionable 41 percent result, which they knew would grab headlines and inspire fear.
One can’t escape the impression that the motivation behind what presents itself as a disinterested academic study was to include a selected, unrepresentative result from the highly respected AHS in the meta-analysis and to use the far inferior case-control studies to jack up the summary relative risk to obtain a statistically significant finding.
Apparently, the authors judged that few lay people, and likely few scientists, would notice the sleight of hand amidst the large number of secondary analyses and lengthy obfuscatory discussions.
One final observation. This paper underwent peer review, most likely with at least two outside reviewers as well as the editor(s) at the journal evaluating it. We must ask how such a misleading and tendentious paper could have passed the peer review process.(3)
(1) Carey Gillam is a former Reuters reporter, who now works as the ‘research director’ for US Right to Know, an organic industry-funded anti-biotechnology group that has been in the headlines for its repeated attacks against agricultural biotechnology, university scientists, and science communicators. She recently wrote a book claiming to document the alleged dangers of glyphosate. I’ve written about her before.
(2) There is also evidence, noted in an extensive investigation by Reuters, that IARC had initially concluded that the weight of evidence showed glyphosate posed no serious carcinogenic threat, but that conclusion was changed days before the report’s release. The Invited Specialist for the IARC panel that evaluated glyphosate, Christopher Portier, began working with law firms suing Monsanto less than two weeks after the IARC classification of glyphosate as a probable human carcinogen was announced.
(3) It is ironic that the authors cite the classic paper by John Ioannidis, “Why most published research findings are false” (2005), since, by making the cardinal errors pointed out above, they have produced a result that no amount of secondary analyses and statistical fine points can make up for.
Disclosure: I have no financial involvement with Monsanto/Bayer or any other conflict of interest relating to this topic. Geoffrey Kabat, Ph.D., is a cancer epidemiologist who has been on the faculty of the Albert Einstein College of Medicine and Stony Brook University Medical School and is the author of Hyping Health Risks: Environmental Hazards in Daily Life and the Science of Epidemiology and Getting Risk Right: Understanding the Science of Elusive Health Risks. Follow him at firstname.lastname@example.org. This article also appeared in Forbes.