Science Progress | Where science, technology, and progressive policy meet

Dual Use Dangers

Biotechnology and Global Security Are Intertwined

In the photo: members of the Japanese National Police Agency's Nuclear, Biological, Chemical Terrorism Investigative Unit. SOURCE: AP

In a cautionary message to the world just before his retirement a year ago, then U.N. Secretary-General Kofi Annan said that as biological research expands and advanced biotechnologies become increasingly available, the associated safety and security risks will increase exponentially. “When used negligently, or misused deliberately, biotechnology could inflict the most profound human suffering—ranging from the accidental release of disease agents into the environment to intentional disease outbreaks caused by state or non-state actors,” he warned. “Soon, tens of thousands of laboratories worldwide will be operating, in a multi-billion dollar industry. Even novices working in small laboratories will be able to carry out gene manipulation.”[1]

Annan then pointed out that the world lacks a system of effective safeguards for managing the risks of biotechnology. “Scientists may do their best to follow rules for the responsible conduct of research,” he said. “But efforts to harmonize these rules on a global level are outpaced by the galloping advance of science itself, and by changes in the way it is practiced.” To address this policy challenge, Annan called for the creation of a “global forum” under United Nations auspices so that representatives from governments, industry, science, public health, law enforcement, and the public could discuss how to ensure that biotechnology serves the common good.


Since Annan’s retirement, however, the idea of a global forum has languished. It’s time for his successor, U.N. Secretary-General Ban Ki-moon, to take up the proposal and make it a reality. The reason: Preventing the misuse of biotechnology for the creation of weapons of almost unimaginable horror will require coordinated global action that only an international forum can muster. There is simply no way the United States can tackle this dangerous problem on its own.

The Challenge of “Dual-Use” Research

So what are the security implications of biotechnology that prompted Annan’s proposal? One concern is that many biotech facilities, equipment, and materials are inherently “dual-use,” meaning that they can be applied either for legitimate civilian purposes or for the development and production of biological weapons. Moreover, whereas biotechnology was once the exclusive domain of the advanced industrial economies of the United States, Western Europe, and Japan, it is now a major focus of investment by developing countries such as China, Cuba, India, Indonesia, Malaysia, Singapore, South Korea, South Africa, and Taiwan.

There are many good reasons for the spread of biotechnology: it can enhance public health, improve agricultural yields, and foster economic development. Yet the proliferation of dual-use biotechnologies to unstable regions of the world, where war, trafficking, and terrorism are rife, is potentially a recipe for disaster. In addition to the risk that biotech facilities, equipment, and materials might be diverted to bioweapons production, state or non-state actors could conceivably exploit certain types of scientific information generated by biomedical research for hostile purposes.

Historically, scientists have viewed the discovery of new knowledge as an unalloyed good that contributes to human understanding of the natural world and leads to beneficial applications. But bioethicist Arthur Caplan of the University of Pennsylvania argues that some types of scientific information are dangerous in the wrong hands. “We have to get away from the ethos that knowledge is good, knowledge should be publicly available, that information will liberate us,” he says. “Information will kill us in the techno-terrorist age.” [2]

Given these concerns, a philosophical question facing the life-sciences community is whether certain areas of research constitute “forbidden knowledge” that should be banned or otherwise restricted on security grounds. Controversy over the risks of dual-use research first erupted in early 2001, when Australian scientists published a paper in the Journal of Virology reporting the unexpected finding that insertion of a single gene for an immune-system protein into the mousepox virus made this normally benign virus extremely lethal in mice, even those that had been vaccinated against it. Because bioweapons developers could potentially use the same manipulation to increase the lethality and vaccine resistance of related viruses that infect humans, such as smallpox and monkeypox, critics argued that the information was dangerous and should not have been published.

To examine this emerging debate, the National Research Council (the policy analysis arm of the U.S. National Academies) convened an expert committee chaired by Gerald Fink, a biology professor at the Massachusetts Institute of Technology. In late 2003, the Fink Committee released its report, Biotechnology Research in an Age of Terrorism. It concluded that certain types of basic research in the life sciences, although conducted for legitimate purposes, could indeed generate findings that might be misused by others to threaten public health or national security.

The Fink Committee identified seven “experiments of concern” that would render a pathogen more deadly or transmissible, able to infect additional species, resistant to existing vaccines or therapeutic drugs, easier to convert into a weapon, or capable of evading diagnostic or detection techniques.[3] In response to one of the committee’s recommendations, the Bush administration established the National Science Advisory Board for Biosecurity (NSABB), which met for the first time in mid-2005.

The NSABB’s mandate is to develop criteria for identifying dual-use research, draft guidelines for the review and oversight of risky experiments, and recommend possible restrictions on the publication of sensitive data. The board consists of up to 25 voting members from the U.S. scientific and national security communities, along with non-voting representatives from the 15 federal agencies that conduct or support research in the life sciences.

Experiments of Concern


To give a recent example of dual-use research, in June 2007 researchers at the Helmholtz Center for Infection Research in Braunschweig, Germany, reported in the journal Cell that they had altered the DNA of the Listeria bacterium, a human pathogen, to enable it to cause disease in mice, a species it does not naturally infect.[4] This finding opened the way to developing a mouse model of Listeria infection, an important step in developing new treatments for the disease.

Yet the experiment has troubling security implications because the technique used to modify the host range of the bacterium could potentially be applied in reverse, enabling an animal pathogen to infect humans. Despite this risk, the editors of Cell did not seek outside advice about whether to publish the study. Moreover, because the German researchers could have published their work in a European journal, this case suggests that U.S. controls on dual-use research will not be effective unless other countries sign on.

Another emerging area of biotechnology with security implications is synthetic biology, which involves the design and synthesis of long strands of DNA.[5] The DNA molecule encodes genetic information with an alphabet of four “letters,” or nucleotide bases (A, T, G, and C), which can be strung together in any conceivable sequence. The advent of automated DNA synthesizers has spawned a new industry in which hundreds of companies around the world—including firms in China, India, and Iran—can synthesize DNA sequences to order. A researcher seeking a particular piece of DNA simply goes to the supplier’s web site, enters the desired nucleotide sequence and a credit card number, and several days later a vial containing the synthetic DNA arrives in the mail.
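The point that a genome is just a string over a four-letter alphabet is worth making concrete. The toy sketch below (illustrative only; the sequence shown is arbitrary, not from any real organism) validates a sequence against the nucleotide alphabet and computes the complementary strand, which is all the information a synthesis order actually carries:

```python
# Illustrative only: a DNA sequence is a string over the four-letter
# alphabet A, T, G, C, which is why it can be typed into a web form
# and synthesized to order.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def is_valid_dna(seq: str) -> bool:
    """True if the sequence uses only the four nucleotide letters."""
    return all(base in COMPLEMENT for base in seq.upper())

def reverse_complement(seq: str) -> str:
    """Return the opposite strand, read in the conventional 5'-to-3' direction."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

order = "ATGGCCATTGTAATGGGCCGC"  # arbitrary example sequence
assert is_valid_dna(order)
print(reverse_complement(order))
```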

A small fraction of DNA-synthesis companies, known as “gene foundries,” are capable of making gene-length strands of DNA consisting of thousands of nucleotide base-pairs. These segments can then be assembled in the right order to form an entire genome—the full complement of genes coding for a microorganism. Since 2002, scientists have used this technique to reconstruct two human viruses in the laboratory: poliovirus (7,741 base-pairs) and the formerly extinct Spanish influenza virus (13,500 base-pairs), the latter of which killed tens of millions of people during the worldwide pandemic of 1918-19. Both synthetic viruses have been shown to be infectious and capable of causing illness in experimental animals. Given the rapid advances in automated DNA synthesis, it is only a matter of time before it becomes possible to synthesize larger viruses in the laboratory, such as Ebola virus (about 19,000 base-pairs) or even smallpox virus (185,000 base-pairs). This development would make it possible to circumvent the physical security measures that currently keep such deadly pathogens out of the wrong hands.
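The assembly step described above can be sketched in miniature. Real protocols are biochemical, not computational, but the logic is the same: gene-length fragments are ordered so that the end of each one overlaps the start of the next, and merging the overlaps yields the full-length sequence. The overlap length and fragments below are hypothetical, chosen only to illustrate the idea:

```python
# Conceptual sketch (not a real assembly tool): stitch an ordered list of
# fragments into one sequence by merging a fixed-length shared overlap,
# mimicking how overlapping synthetic DNA fragments are joined into a genome.

OVERLAP = 10  # assumed overlap length, for illustration only

def assemble(fragments):
    """Join ordered fragments, merging each shared overlap exactly once."""
    genome = fragments[0]
    for frag in fragments[1:]:
        if genome[-OVERLAP:] != frag[:OVERLAP]:
            raise ValueError("fragments do not overlap as expected")
        genome += frag[OVERLAP:]  # append only the non-overlapping tail
    return genome

# Toy example: three 20-letter fragments covering a 40-letter sequence.
target = "ACGT" * 10
pieces = [target[0:20], target[10:30], target[20:40]]
assert assemble(pieces) == target
```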

A more ambitious goal of synthetic biology is to create novel genetic circuits that would enable microorganisms to perform practical tasks, with applications in medicine, computation, environmental remediation, and energy production. Futuristic examples include giving bacteria the ability to sequester carbon dioxide or to manufacture hydrogen fuel. To facilitate the design and construction of genetic circuits, Prof. Drew Endy and his colleagues in MIT’s Department of Biological Engineering have compiled a “tool kit” of pieces of DNA with well-characterized functions. The idea is to assemble these components into functional genetic circuits, much as electronic devices are built from transistors, resistors, and diodes.[6]
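The components-into-circuits analogy can be made concrete with a toy model. The part names and the simple grammar below are hypothetical, not the actual MIT tool kit; the point is only that parts have well-characterized types and compose in a fixed order, the way electronic components do:

```python
# Conceptual sketch only: model DNA parts as typed components and check
# that a candidate circuit follows the standard order of one gene-expression
# unit: promoter (switch), ribosome binding site (translation start),
# coding sequence (the protein), terminator (stop). Part names are invented.

EXPECTED_ORDER = ["promoter", "rbs", "coding_sequence", "terminator"]

def is_valid_circuit(parts):
    """Check that a list of (name, type) parts follows the expression grammar."""
    return [ptype for _, ptype in parts] == EXPECTED_ORDER

circuit = [
    ("P_example", "promoter"),       # hypothetical inducible promoter
    ("RBS1", "rbs"),
    ("GFP", "coding_sequence"),      # e.g., a fluorescent reporter protein
    ("T1", "terminator"),
]
print(is_valid_circuit(circuit))
```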

To date, synthetic biologists have demonstrated the basic concept by performing a series of ingenious parlor tricks. For example, they have designed genetic modules that cause bacteria to blink on and off like microscopic Christmas-tree lights, or to become light-sensitive so that a lawn of the bacteria behaves like a photographic plate.

Despite the potential benefits of synthetic biology, the field could provide individuals with malicious intent with new ways to cause harm. For example, it may become possible to engineer novel viral or bacterial genomes capable of expressing toxins or virulence factors for which no natural immune defenses exist, and against which existing therapeutic drugs are powerless.[7] In addition, as synthetic biology diffuses widely, a new breed of “biohackers” might emerge, intent on showing off their prowess by developing real viruses rather than digital ones.

The potential misuse of biotechnology for hostile purposes is not limited to the development of more deadly microbial and toxin agents. Following on the work of the Fink Committee, another National Research Council panel co-chaired by virologists Stanley Lemon and David Relman issued a report in early 2006 titled Globalization, Biosecurity, and the Future of the Life Sciences,[8] which concluded that several other areas of biotechnology and biomedical research also pose dual-use dilemmas.


The pharmaceutical industry, for example, is developing a class of drugs based on natural body chemicals called “bioregulators,” which act in low concentrations to control vital physiological functions such as blood pressure, heart rate, respiration, temperature, sleep, mood, and the immune response. Such research offers the promise of medications that have highly specific cellular targets, providing greater therapeutic efficacy with fewer side effects. Yet bioregulator-based drugs could also be designed to disrupt vital life processes or to alter consciousness and emotions, creating the basis for a new generation of lethal and incapacitating agents.[9] Similarly, advances in drug delivery systems, such as needle-free systems for administering insulin to diabetics in the form of an inhalable aerosol, have security implications because aerosolization is the optimal method for disseminating biowarfare agents over large areas.

It’s important to view these dual-use risks in context, however. Much as a naked bullet is harmless, a pathogen in a test tube does not constitute a biological weapon. To injure or kill, a bullet must be combined with a cartridge containing gunpowder and fired from a handgun or rifle. In much the same way, a biological weapon is a “system” consisting of a pathogen or toxin that has been processed and formulated to enhance its shelf life and facilitate its delivery as a fine-particle aerosol; a container, such as a spray tank or munition; and a delivery vehicle, such as an artillery piece or a tactical aircraft.

Because the weaponization and system-integration steps are complex, they create significant hurdles to the acquisition of biological weapons. Testing under realistic conditions is also essential, ideally with experimental animals. The Soviet Union, for example, did extensive field trials of bioweapons at a remote outdoor test site on Vozrozhdeniye Island in the Aral Sea. While it is doubtful that terrorist groups or rogue individuals could develop advanced bioweapons capable of inflicting mass casualties, such weapons are clearly within the reach of determined nation-states.

Proposed Oversight Mechanisms

What can be done to manage the risks of dual-use research in the life sciences without causing significant harm to the scientific enterprise? To date, the U.S. National Science Advisory Board for Biosecurity has defined “dual-use research of concern” as follows:

Research that, based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment, or materiel.[10]

This definition sets the threshold fairly high: The risk of misuse must arise directly from the research findings and have significant implications for public health, agriculture, or national security. For example, an experiment that creates a highly virulent organism, but one that cannot be readily transmitted to humans, would not be considered a major threat.


The NSABB has also developed draft guidelines for the oversight of federally funded research in the life sciences to minimize the risk of misuse for hostile purposes. These guidelines make the principal investigator responsible for determining if a proposed research project meets the criteria of dual-use research of concern. If it does, then the proposal would be subjected to a security review by a biosafety committee affiliated with the scientist’s home institution. About 400 institutional biosafety committees at research centers across the United States currently review proposed experiments involving recombinant DNA. Expanding the purview of these committees to cover dual-use research would require reinforcing them with security experts. At present, the draft NSABB guidelines are under consideration by U.S. government agencies, which will ultimately decide how to implement them.

National biosecurity measures are not sufficient, however. Because biotechnology is a global activity, managing its downside risks will require adopting policies at the international level to ensure that only legitimate scientists have access to deadly pathogens and to oversee potentially dangerous research. At present, biosecurity rules vary widely from country to country, creating a regulatory patchwork with gaps and vulnerabilities that bioterrorists could exploit as targets of opportunity. Moreover, if other countries adopt weaker guidelines than those of the United States, then the anticipated security benefits of the U.S. regulations will not materialize and American researchers and scientific journals will find themselves at a competitive disadvantage.

Any effective global system for regulating biotechnology will have to be based on common standards for laboratory security and research oversight, which countries would implement and enforce on a national basis.[11] In 2006, the World Health Organization took a useful first step in this direction by issuing a set of guidelines for securing dangerous pathogens in locked cabinets, vetting laboratory personnel to make sure they are bona fide scientists, and keeping accurate records, but only some researchers have adopted these rules.[12]

More action is clearly needed. To prevent the misuse of biotechnology, the Lemon-Relman report called for creating a “web of prevention” extending from the individual scientist to the global level. Key elements of this web are the norm against biological warfare enshrined in the Biological and Toxin Weapons Convention of 1972, transnational networks of scientists and other stakeholders, export-control regimes, professional codes of conduct, and educational and awareness efforts.

It is unclear, however, what institutional mechanism could serve to coordinate biosecurity measures at the global level. The Vienna-based International Atomic Energy Agency inspects civilian nuclear plants around the world to ensure that fissile materials are not diverted for nuclear weapons, but that model is a poor fit for biotechnology. The nuclear industry consists of a limited number of facilities and stocks of radioactive materials that are amenable to precise accounting, whereas the biotechnology industry is extremely diffuse and involves the use of self-replicating organisms that cannot be tracked in a quantitative manner.

Harvard University biochemist Matthew Meselson has warned that 21st century biotechnology will make it possible not only to devise additional ways to destroy life but also to manipulate the processes of cognition, development, reproduction, and inheritance, creating “unprecedented opportunities for violence, coercion, repression, or subjugation.” He argues that “movement towards such a world would distort the accelerating revolution in biotechnology in ways that would vitiate its vast potential for beneficial application and could have inimical consequences for the course of civilization.”[13]

Given these very real dangers and the complex challenges of developing an international mechanism to manage the risks of biotechnology, there is an urgent need to establish Kofi Annan’s proposed “global forum.” U.N. Secretary-General Ban should not miss the opportunity to address one of the major security challenges of our time.

Jonathan B. Tucker is a Senior Fellow specializing in biological and chemical weapons issues in the Washington, DC office of the James Martin Center for Nonproliferation Studies of the Monterey Institute of International Studies.


1) Reuters, “U.N. Leader Urges Biotech Safeguards,” New York Times, November 19, 2006. Kofi Annan delivered the speech on accepting the Freedom Prize of the Max Schmidheiny Foundation at the University of St. Gallen, Switzerland, on November 18, 2006.

2) Arthur Caplan, quoted in Ronald M. Atlas, “Bioterrorism: The ASM Response,” ASM News [American Society for Microbiology], 68 (2002): 117-121.

3) Committee on Research Standards and Practices to Prevent the Destructive Applications of Biotechnology, Biotechnology Research in an Age of Terrorism (Washington, DC: National Academies Press, 2004): 5.

4) Rick Weiss, “Release of Microbe Study Spurs Bioterror Worries,” Washington Post, June 1, 2007, p. A06.

5) Jonathan B. Tucker and Raymond A. Zilinskas, “The Promise and Perils of Synthetic Biology,” New Atlantis, Spring 2006.

6) Drew Endy, “Foundations for Engineering Biology,” Nature, 438 (Nov. 24, 2005): 449-453.

7) National Science Advisory Board for Biosecurity (NSABB), Addressing Biosecurity Concerns Related to the Synthesis of Select Agents (Bethesda, MD: National Institutes of Health, December 2006): 3.

8) Committee on Advances in Technology and the Prevention of Their Application to Next Generation Biowarfare Threats, Globalization, Biosecurity and the Future of the Life Sciences (Washington, DC: National Academies Press, 2006).

9) Alan Pearson, “Incapacitating Biochemical Weapons: Science, Technology, and Policy for the 21st Century,” Nonproliferation Review, 13 (2) (July 2006): 151-188.

10) National Science Advisory Board for Biosecurity (NSABB), Proposed Framework for the Oversight of Dual Use Life Sciences Research: Strategies for Minimizing the Potential Misuse of Research Information (Bethesda, MD: National Institutes of Health, June 2007): 17.

11) Michael Barletta, Amy Sands, and Jonathan B. Tucker, “Keeping Track of Anthrax: The Case for a Biosecurity Convention,” Bulletin of the Atomic Scientists, 58 (3) (May/June 2002): 57-62.

12) World Health Organization, Biorisk Management: Laboratory Biosecurity Guidance, WHO/CDS/EPR/2006.6, September 2006.

13) Matthew Meselson, “Averting the Hostile Exploitation of Biotechnology,” CBW Conventions Bulletin, no. 48 (June 2000): 16.
