Several recent and upcoming events in the European Union are bringing increased focus to the topic of communicating risks about Endocrine Disrupting Chemicals (EDCs). The first took place on May 16 at the annual meeting of the European Chapter of the Society of Environmental Toxicology and Chemistry (SETAC), held in Rome. The second will be a moderated discussion at the annual Helsinki Chemical Forum on June 14. These separate but related events, and the continued discussions that may follow, make it timely to explore the risk communication of EDCs in greater detail.

Do EDCs pose a unique challenge for risk communication?

No, although some would like us to think so. Virtually all environmental health threats pose the same set of challenges for risk communication. Endocrine disruption is simply one of many known modes of action by which chemicals may potentially produce toxicity. These threats are all complex, multifactorial and only partially understood; they involve high degrees of scientific uncertainty and may lead to chronic health effects that occur at low probability, months to decades after exposure. To some, EDCs are the shiny new toy that causes people to lose historical perspective; think of Andy’s enthusiastic embrace of Buzz Lightyear at the expense of Sheriff Woody in the first Toy Story movie.

For decades, regulators around the globe have effectively identified and managed various chemical threats based on preventing adverse effects without knowing the specific modes of action by which they cause toxicity. Indeed, many of the chemicals most often cited as EDCs (e.g., polychlorinated dioxins and biphenyls, organochlorine insecticides such as DDT, dieldrin and lindane, nonylphenol ethoxylate surfactants, etc.) have been banned or voluntarily withdrawn from the market without knowledge of the precise mode of action or mechanisms by which they act.

In advice directed at clinicians, Solomon and Janssen tacitly acknowledged that there is nothing that differentiates risk communication for EDCs from other environmental health concerns:

“The overall approach involves having some knowledge of the toxicity of the contaminant of concern, assessing the route and likelihood of exposure, and being able to communicate a science-based approach to reducing unnecessary exposures.”  

Notably, Solomon and Janssen failed to address the likely quantity or duration of exposure. However, later in the same article they wrote: 

“It is often impossible to quantify or predict how much greater risk a person faces from an environmental exposure. In most situations, the exposure happens only once or a few times, is at a low concentration, and it is not likely to substantially increase the risk of adverse effects above that seen in the general population. In addition, there is often little or nothing that can be done in retrospect about the exposure incident. Providers can use this opportunity to offer reassurance and to educate the patient on how to reduce future exposures.”

Need for Balance in Risk Communication

All individuals, regardless of whether they are technical experts, policymakers or laypersons, require credible, trustworthy, balanced and understandable information about the scientific evidence of environmental health threats, including exposures to EDCs, in order to make informed risk decisions. However, as David Ropeik, formerly of the Harvard Center for Risk Analysis and now a consultant, complained in a recent blog, the media too often jump at stories they know will scare the public (“if it scares, it airs”) but deliberately shy away from stories that provide reassurance. How is the public supposed to make good decisions when it gets only half the story?

Ropeik cited some specific examples to drive home the need for balance: 

“Incomplete or imbalanced and alarmist information can lead directly to harmful decisions—like a pregnant mother who, to protect her unborn child, foregoes eating seafood because she is unaware of the potential cons and pros of eating certain species of fish. Fear of vaccines contributes to reduced immunization rates and the return of nearly eradicated diseases. Fear of processed milk leads some to choose raw milk despite the vastly increased likelihood of illness or death from pathogens. Moreover, selectively alarmist coverage can harm us just by making us worried. In a contest between stress and BPA or mercury, stress is far and away the greater risk.  The more worried we are, the worse it is for our health. The stress from alarmism is a risk all by itself.”

He also discussed the risk-reward tradeoffs that exist, and pointed out that effective risk communication should include a discussion of the benefits of a particular technology, such as the role a specific chemical plays in a consumer product, so that those receiving the information can make better-informed choices.

Recently, several Nordic countries met to exchange their experiences with risk communication and concluded that it should be positive, warm and focused on delivering a few simple, practical tips. They warned that alarming pregnant women about chemical risks can have the unwanted consequence that they stop breastfeeding their babies. Instead, keep messages in a positive and constructive tone, they said. Use channels that audiences trust, including social media, as appropriate. Their final advice was to present risks in comparison with better-known, more firmly established factors.

Scientists Need to Demonstrate Greater Humility

In a seminal paper published in 2005, Dr. John Ioannidis argued that the majority of scientific findings published in peer-reviewed journals are likely to be wrong. Specifically, he wrote:

“There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising. It can be proven that most claimed research findings are false.”

Ioannidis cited multiple reasons for the phenomenon, including small sample sizes, misuse or misinterpretation of statistical tests, and a lack of adherence to common standards of study design, measurement and analysis.

Many hoped that Ioannidis’ paper would cause academic researchers to become more circumspect and cautious. Unfortunately, more than ten years later not much has changed, and many researchers continue to over-hype their findings, much to the detriment of science, medical practice and public policy.

Even Solomon and Janssen acknowledged the importance of humility in communicating risk:

“It is important to approach questions of risk humbly with an understanding of the limitations of the science and the importance of the social context. It is also important to understand factors that contribute to different perceptions of risk to anticipate ways patients or communities may react to a hazard.”

Just recently, at the SETAC conference referenced above, Sofie Vanthournout of Sense about Science, a UK charity that promotes the public understanding of science, stressed that risk communication should focus on the public’s concerns and needs, rather than on scientists simply telling the public what they think it needs to know.

Less arrogance and greater humility are needed when communicating with the public about health risks.

Acknowledge Uncertainty and Alternative Viewpoints

As noted above, the science is most often incomplete and substantial uncertainties exist. Rarely is there complete consensus among scientists on the risk of a given threat, and even then that consensus can be wrong. For risk communication to be credible, it must acknowledge the uncertainties and opposing viewpoints. Scientists and regulators who engage the public in risk communication must:

  • Restrict their communications to areas in which they have expertise. 
  • Present information accurately, in clear, understandable terms. 
  • Disclose relevant interests. 
  • Discuss weaknesses and limitations of their work and opinions. 
  • Outline uncertainties and opposing scientific views.

Employ the International Consensus Definition of an EDC

All too often, those communicating about EDCs mistakenly conflate the terms “endocrine activity” and “endocrine disruption”. The distinction is very important and needs to be maintained. The World Health Organization (WHO) International Programme on Chemical Safety (IPCS) defines an endocrine disrupting chemical as “an exogenous substance or mixture that alters function(s) of the endocrine system and consequently causes adverse health effects in an intact organism, or its progeny, or (sub)populations.”

This internationally accepted definition of an endocrine disruptor has two very important elements: first, that the substance alters the function of the hormonal system, and second, by doing so causes an adverse health effect (i.e. toxicity). The likelihood that an endocrine disruptor will cause harmful effects is based on its potency (how active it is) and potential for exposure (dosage, frequency, and duration). The definition is important so as not to confuse beneficial or neutral “endocrine effects” with negative “endocrine disruption,” with the latter term being linked to adverse health effects. 

Scientists and regulators who engage in risk communication about endocrine disruption should use accurate and precise characterizations and refrain from applying labels that imply concern, when that concern is not supported by the evidence.

Communicate Risk, Not Hazard Alone

For effective risk communication to take place, a risk assessment must first be done, even if it is qualitative rather than quantitative. Risk assessment is the analysis of the possibility of harm arising from a particular exposure to a chemical substance under specific conditions. Risk is a function of the inherent hazardous properties of a chemical substance, but also of its potency (the steepness of its dose-response relationship) and the potential for exposure.

Much too often, the public receives only hazard information (e.g., chemical X is likely to cause cancer), but does not receive adequate information about potency or about the likelihood and/or magnitude of their exposure in the course of their daily activities. If there is little or no exposure, then there is little to no possibility that harm will occur.
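To make the hazard-versus-risk distinction concrete, a screening calculation of the kind risk assessors commonly use compares an estimated daily intake with a health-based reference dose. The sketch below uses entirely hypothetical numbers; the chemical, intake and reference dose are illustrative assumptions, not data from any real assessment.

```python
# Illustrative screening calculation: hazard alone says nothing about risk
# until exposure is considered. All numbers below are hypothetical.

def hazard_quotient(daily_intake_mg_per_kg: float,
                    reference_dose_mg_per_kg: float) -> float:
    """Ratio of estimated intake to a health-based reference dose.
    Values well below 1 indicate exposure far under levels of concern."""
    return daily_intake_mg_per_kg / reference_dose_mg_per_kg

# Hypothetical chemical X: reference dose of 0.05 mg/kg-day,
# estimated consumer intake of 0.0001 mg/kg-day from daily activities.
hq = hazard_quotient(0.0001, 0.05)
print(f"Hazard quotient: {hq:.3f}")  # 0.002: exposure is 500-fold below the reference dose
```

The point of the sketch is simply that the same hazardous substance yields very different risk conclusions depending on how much exposure actually occurs.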

As an example, a chemist in Langley who also writes a blog recently complained about a new study which found that food can linings still contained trace levels of BPA and its substitutes. To quote the blogger:

“The scientists who conducted the study used a metal tool to scrape the inside of the cans to get a sample of the coating which they then analyzed using a Fourier transformed infrared (FTIR) spectroscopy. The result was simply an identification of the polymer used in the can lining, nothing more. Hoping for more (like say a concentration or a detection limit) I went to look at the raw data at Healthy Stuff (the people who actually conducted the science part of the study) and all they provided was a spreadsheet saying what polymer had been identified in what can. To be clear here, they didn’t test any food from these cans. Moreover, they had to use aggressive techniques to pull off enough material from the cans and lids to do their testing (because it is affixed so firmly) and even then they didn’t even tell us how much of the stuff was in there?”

Studies such as this one serve a dubious purpose and are incapable of informing the public about the risks they face from chemical exposures. They confuse rather than enlighten.

California’s Proposition 65 (more formally known as the Safe Drinking Water and Toxic Enforcement Act of 1986) is yet another example of where communicating hazard only, without the context of exposure or risk, can mislead and confuse the public. In a recent editorial entitled “Warning: Too many warning signs are bad for your health”, the Los Angeles Times complained that the law requires posting of signs that don’t provide the context to help people make educated decisions about the risk they face. The Times called for the law to be fixed or replaced. 

Communicate Absolute Risk in Addition to Relative Risk

Human observational epidemiology studies are frequently the source of many of the medical findings that consumers read about. The results are often expressed as a relative risk, i.e., the risk of disease among persons exposed to some particular risk factor, e.g., a chemical substance, relative to the risk among those not so exposed.

Regrettably, the scientists who conduct and communicate such studies almost always focus exclusively on relative risk and ignore absolute risk. So it is common to read that exposure to chemical X increases your risk two-fold, three-fold or more. However, absolute risk is critical to understanding the magnitude of the threat.

Kevin Lomangino, a managing editor, has urged the use of absolute risk and has even published a primer on the topic for reporters. Small relative risk values, when consistent, are important when the number of people affected is large. However, a large relative risk for a rare disease amounts to only a small absolute risk, which may reasonably be considered not meaningful, either by public health planners or by individuals assessing their own choices. By contrast, a small relative risk may amount to a large number of cases for a common disease.
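The arithmetic behind this point is straightforward. The sketch below uses invented baseline incidence figures (assumptions for illustration, not data from any study) to show how a dramatic relative risk for a rare disease can translate into fewer extra cases than a modest relative risk for a common one.

```python
# Hypothetical incidence figures to contrast relative and absolute risk.

def absolute_increase(baseline_per_100k: float, relative_risk: float) -> float:
    """Extra cases per 100,000 people implied by a given relative risk."""
    return baseline_per_100k * (relative_risk - 1)

# Rare disease: baseline 2 per 100,000; a relative risk of 3 sounds alarming...
rare = absolute_increase(2, 3.0)       # ...but adds only 4 extra cases per 100,000
# Common disease: baseline 5,000 per 100,000; a relative risk of just 1.2...
common = absolute_increase(5000, 1.2)  # ...adds roughly 1,000 extra cases per 100,000
print(rare, common)
```

Headlines built on the relative risks alone ("triples your risk!") would rank these two threats in exactly the wrong order.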

Scientists and regulators need to include measures of absolute risk when communicating with consumers so that they can better place the risks in context.

Acknowledge the Existence and Influence of Reporting/Publication Bias

Hazard and risk assessments may overstate the strength and weight of the scientific evidence linking a chemical to a particular adverse health effect, because it has been shown repeatedly that studies which fail to find an association between an exposure and a health effect (i.e., negative or near-null findings) are less likely to be published in the scientific literature.

There is even a name for this phenomenon: “publication bias”. It is one form of what is referred to as reporting bias. Studies estimate that positive findings are four to five times more likely to be published in the literature than negative or near-null findings. There are two reasons for this:

    (1) Academic scientists are less likely to seek to publish negative or near-null results: they do not view it as a good use of their time, and such results are perceived as unhelpful to career advancement. Instead, they stick them in a file drawer. Scientists have also learned that journal publishers and editors do not like these results either and are more likely to reject them.

    (2) Most reputable journals receive far more manuscripts than they have space to publish, so they must prioritize. Some journals, such as Science and Nature, consider themselves a significant source of news for the public and again prioritize articles that show adverse effects.
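A toy simulation makes the consequence of this selective publishing visible. Here the true effect is assumed to be zero, and “positive” results are published roughly five times as often as null ones, in line with the estimate above; the average published effect then drifts well away from the truth. The cutoff and publication probabilities are illustrative assumptions, not measured values.

```python
# Toy simulation of publication bias: the true effect is zero, but
# studies that cross a crude 'significance' cutoff are published far
# more often, so the published literature overstates the effect.
import random

random.seed(1)

TRUE_EFFECT = 0.0
published = []
for _ in range(10_000):
    estimate = random.gauss(TRUE_EFFECT, 1.0)      # one noisy study result
    looks_positive = estimate > 1.0                # crude cutoff for a 'positive' finding
    publish_prob = 0.5 if looks_positive else 0.1  # ~5x more likely, per the estimate above
    if random.random() < publish_prob:
        published.append(estimate)

mean_published = sum(published) / len(published)
print(f"Mean published effect: {mean_published:.2f} (true effect: {TRUE_EFFECT})")
```

A reviewer who reads only the published studies would conclude there is a clear positive effect, even though every study was sampling pure noise.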

The consequences of publication bias can be severe: it distorts the scientific record, and clinicians and policymakers may be seriously misled. Conclusions derived from critical reviews based only on published data should be interpreted cautiously, especially for observational studies, which have been shown to be particularly vulnerable to the phenomenon.

The problem, although well recognized, persists, and is another source of the uncertainties discussed above.

Scientists must be encouraged to publish their results, regardless of the direction of the findings. Journal editors need to become more receptive to publishing such results. And scientists and regulators need to be aware of the phenomenon and exercise caution when communicating about risks.

Activism is Jeopardizing the Integrity of Science and Risk Communication

Finally, public health and environmental activism is on the rise and is increasingly influencing the design, analysis and reporting of scientific research, thereby jeopardizing the integrity and credibility of risk communication.

This is certainly the case for EDCs, where a group of US-based scientists has aggressively lobbied the European Commission on its proposed criteria for identifying EDCs, even going so far as to conduct studies using questionable methodology that led to grossly exaggerated estimates of the health burden costs they attribute to EDC exposures.

Geoffrey Kabat, author of Getting Risk Right: Understanding the Science of Elusive Health Risks, recently published a blog post in which he summarized a keynote address by the surgeon and writer Atul Gawande:

“Gawande opened by giving several definitions of science that emphasized what an unusual and delicate balance science represents: ‘…having a scientific understanding of the world is really about helping people to understand how you judge which information to trust, while understanding that the scientific mindset is one where you never have complete trust.’”

He went on to describe the increasing prevalence of mistrust of science over the past four decades. He acknowledged that, in fact, much of what is published is wrong and that the scientific consensus can be wrong. The task, as he framed it, is to distinguish science from pseudoscience, which has certain distinct hallmarks. The key, he argued, is to go back to what genuine science looks like as opposed to what pseudoscience looks like.

As an example of how to decide what to believe on a contentious question, Gawande took the case of BPA and used the discussion of the chemical in Kabat’s book, Getting Risk Right.

He extracted about half a dozen criteria for making a judgment:

  • Which side tends to favor the data that supports their theory?
  • Which side tends to cherry-pick the data?
  • Which side is more likely to narrow the focus to the papers that bolstered their point-of-view?
  • Which ones grappled with the weakness of the BPA effect?
  • Which papers were searching for alternative explanations?
  • Which ones cited the contrary data?
  • Which ones assessed the totality of the views vs. taking a litigious position?

Gawande continued: 

“And you can never rule it out, but his [Kabat’s] conclusion was that one direction was quite clear. The groups that were doing the science and approaching it in a more scientific way consistently came out with the finding that BPA was not a significant health threat. This is an approach for being able to arrive at a way of finding your way through the maze – you look at what is the scientific approach.”

Kabat further opined “One could add further criteria to the list. For example, which studies come from groups which have a clear professional stake in the hypothesis, by virtue of having devoted much of their career to this question? Also, which side tends to resort to extra-scientific arguments to score points, such as asserting that the opposing side has conflicts-of-interest, rather than keeping the discussion focused on the science? These are quite regular features of controversies in the area of public health.”

Kabat and Gawande make a strong case that scientist activism can be problematic. That is not to say that scientists should be automatically disqualified from engaging in policy debates; they have an important perspective that needs to be heard. However, scientists who wish to engage in advocacy need to adopt and practice a code of conduct that helps protect the integrity of science and of risk communication.

In Conclusion

Communicating risks about EDCs presents no unique challenges compared with communicating risks about many other potential environmental health threats. Effective risk communication must be scientifically based, credible, trustworthy and balanced; it must present both benefits and risks, and be targeted in an understandable way at the intended audiences. It should be driven by the needs of the public, be constructive and focus on a few simple, practical tips. It should be delivered humbly, and openly acknowledge uncertainties and credible alternative viewpoints.

Communication about risks from EDCs should employ the internationally recognized WHO/IPCS definition, and distinguish between mere endocrine activity and true disruption. In order to provide people with information useful for making choices, the focus should be on communicating risks, not hazard alone. Absolute risks, along with comparisons to better-known and more firmly established health threats, should also be communicated to give audiences important context and perspective. Risk communicators must try to remain objective, restrict themselves to areas in which they have expertise, disclose relevant interests, and avoid letting their personal interests cloud their messages.