Amid unanswered questions about the origins of the coronavirus pandemic, both the U.S. government and scientists have called for a deeper examination of claims that the virus could have escaped from a lab in Wuhan, China.
Much of the discussion surrounds “gain-of-function” research. So The Conversation asked David Gillum and Rebecca Moritz, who work closely with virologists on a day-to-day basis to ensure the safety and security of the research, and Sam Weiss Evans and Megan Palmer, who are science and technology policy experts, to explain what this term means and why this kind of research is important.
What does gain of function mean?
Any organism can acquire a new ability or property, or “gain” a “function.” This can happen through natural selection or a researcher’s experiments. In research, many different types of experiments can confer new functions on an organism, and some of them raise safety and security concerns.
Scientists use a variety of techniques to modify organisms depending on the properties of the organism itself and the end goal. Some of these methods involve directly making changes at the level of genetic code. Others may involve placing organisms in environments that select for functions linked to genetic changes.
Gain of function can occur in an organism in either nature or the laboratory. Some lab examples include creating more salt- and drought-resistant plants or modifying disease vectors to produce mosquitoes that are resistant to transmitting dengue fever. Gain of function can also be useful for environmental reasons, such as modifying E. coli so that it can convert plastic waste into a valuable commodity.
In the current debate around SARS-CoV-2, the virus that causes COVID-19, gain of function has a much narrower meaning: a virus becoming more transmissible between humans, or more lethal in humans. It is important to remember, though, that the term “gain of function” by itself covers much more than this type of research.
Why would researchers do gain-of-function work on potentially dangerous pathogens?
Gain-of-function experiments may help researchers test scientific theories, develop new technologies and find treatments for infectious diseases. For example, in 2003, when the original SARS-CoV outbreak occurred, researchers developed a method to study the virus in the laboratory. One experiment involved adapting the virus to grow in mice so researchers could study it there. This work led to a model for researching the virus and testing potential vaccines and treatments.
Gain-of-function research that focuses on potential pandemic pathogens has been supported on the premise that it will help researchers better understand the evolving pathogenic landscape, be better prepared for a pandemic response and develop treatments and countermeasures.
But critics argue that this research to anticipate potential pandemic pathogens does not lead to substantial benefit and is not worth the potential risks. And they say getting out ahead of such threats can be achieved through other means – biological research and otherwise. For instance, the current pandemic has provided numerous lessons on the social and behavioral dynamics of disease prevention measures, which could lead to robust new research programs on the cultural aspects of pandemic preparedness. Whether the risks of gain-of-function research outweigh its potential benefits, and when alternatives would serve as well, therefore continues to be a subject of debate.
What are some examples of gain-of-function research, and how risky is it?
Some potential outcomes of gain-of-function research include the creation of organisms that are more transmissible or more virulent than the original organism, that evade current detection methods and available treatments, or that can grow in a new part of an organism, such as gaining the ability to cross the blood-brain barrier.
There is no such thing as zero risk in conducting experiments. So the question is whether certain gain-of-function research can be performed at an acceptable level of safety and security by utilizing risk-mitigation measures. These strategies for reducing risk include the use of biocontainment facilities, exposure control plans, strict operating procedures and training, incident response planning and much more. These efforts involve dedication and meticulous attention to detail at multiple levels of an institution.
Lab incidents will still occur. A robust biosafety and biosecurity system, along with appropriate institutional response, helps to ensure that these incidents are inconsequential. The challenge is to make sure that any research conducted – gain-of-function or otherwise – doesn’t pose unreasonable risks to researchers, the public and the environment.
Determining whether specific experiments with potential pathogens should be conducted remains a difficult and contentious topic.
How do experts determine which gain-of-function research poses too much risk?
There are multiple ways to answer this question. The first is if the research is intended to develop a biological weapon. The United Nations Biological Weapons Convention, which went into effect in 1975, forbids state parties from developing, producing, stockpiling, or otherwise acquiring or sharing biological agents, toxins and equipment that have no justification for peaceful or defensive purposes. There should be no research, then, whether gain-of-function or otherwise, that seeks to purposefully develop a biological weapon.
Another way to answer the question is by focusing on the content of the research, rather than its intent. Through experience, researchers and governments have developed lists of both experiments and organisms that need additional oversight because of their potential safety and security risks. One example of this arose when flu researchers placed a self-imposed pause on gain-of-function research involving the transmissibility of highly pathogenic avian influenza H5N1 viruses in 2012. The U.S. government subsequently imposed a moratorium on the work in 2014. Both moratoriums were lifted by the end of 2017 following a lengthy debate and study of the risks and the development of additional oversight and reporting requirements.
In the past decade, the United States has developed oversight for research that could be directly misused for nefarious purposes. This includes policies on “dual-use research of concern” (DURC) and policies on potential pandemic pathogens that have been enhanced to gain transmissibility or virulence.
The main point is that our understanding is constantly evolving. Just before the COVID-19 pandemic began, the U.S. government had started to review and update its policies. It is an open question what lessons will be learned from this pandemic, and how that will reshape our understanding of the value of gain-of-function research. One thing that is likely to happen, though, is that we will rethink the assumptions we have been making about the relationships between biological research, security and society. This may be an opportunity to review and enhance systems of biosecurity and biosafety governance.
David Gillum, Senior Director of Environmental Health and Safety and Chief Safety Officer, Arizona State University, and Rebecca Moritz, Biosafety Director and Responsible Official, Colorado State University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosures: David Gillum is the past president of the American Biological Safety Association (ABSA) International. He is a past judge and a member of the safety and security committee for the International Genetically Engineered Machine Competition. Megan J. Palmer receives funding from the Open Philanthropy Project and the Nuclear Threat Initiative. She is on the Council of the Engineering Biology Research Consortium, co-chairs a World Economic Forum Global Future Council on Synthetic Biology, is an advisor to the International Genetically Engineered Machine Competition, is a member of a World Health Organization Working Group on the Responsible Use of Life Sciences, and is a member of the Board of Directors of Revive and Restore. Sam Weiss Evans receives funding from the Schmidt Futures Foundation. He is a member of the Engineering Biology Research Consortium’s Security Working Group and an advisor to the International Genetically Engineered Machine Competition. Rebecca Moritz does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.