Science Progress | Where science, technology, and progressive policy meet

Neuroethics 101

Bioethics Meets Mind Reading

New technologies enable scientists to understand, alter, and enhance our brains. These advances raise a host of policy-relevant questions about privacy, social and political coercion, and access to technology and therapy.

Mind reading, enhanced concentration, the end of sleep—as surreal as these notions might seem, they are gradually becoming a way of life. The products of neuroscience are changing the way many individuals work, play, and socialize—and will only continue to do so. These technologies are also changing the way the military trains soldiers, and they are changing the way the government interrogates witnesses and detainees. These prospects frighten some and fascinate others, especially as techniques enabling scientists to understand, alter, and enhance our brains advance at a breakneck pace. This technology will influence personal lives, social norms, and government policy.

At the governmental level, there are a host of questions about safety, privacy, social and political coercion, access to technology and therapy, and the direction of government-funded neuroscience research. Informing these policy questions is “neuroethics”—the field of bioethics that studies the values and principles involved in researching and manipulating the brain. Neuroethics also considers how individuals should conceptualize the brain as an integral part of the human person, especially as individuals attempt to alter or enhance their brains and nervous systems.


One of the major reasons drugs or devices that affect the brain might seem more dangerous than those that affect other parts of the body is that the brain is a complex system. Tampering with it can be alarming because the mind is inextricably linked to conceptions of who we are as individuals. This makes the side effects of trying to alter or enhance the brain loom large.

For example, although Ritalin has proven to be quite helpful for children, researchers still do not understand its long-term effects, such as whether it might speed up cognitive decline in old age. Memory-enhancement drugs like Donepezil might also prevent those who take them from properly understanding, integrating, and relating information. In fact, it is precisely because of these side effects that the clinical neuroscience community is moving away from drug-based approaches to brain enhancement, which work by trial and error, and is focusing instead on brain stimulation technologies, which deliberately target specific areas of the brain. Whatever the therapy, the FDA must regulate brain enhancement technologies in order to make sure that the benefits outweigh the risks. The analysis that guided the FDA’s July 2005 approval of vagus nerve stimulation for the treatment of depression was insufficient: the technology did not confer a statistically significant benefit, and was approved because the agency deemed it harmless—but safety alone is an insufficient criterion for therapeutic approval. The FDA needs to be more critical in its evaluation of future enhancement technologies.

Privacy and Mind Reading

Experiments that employ brain-imaging technologies have given scientists a new vision of the brain and its operations—in some cases allowing researchers to build computer programs that understand what people are thinking.

Scientists employ a variety of techniques that allow them to non-invasively peer into the heads of study participants. The most common of these is functional magnetic resonance imaging, or fMRI, which allows observation of oxygen levels in the brain that are associated with neural activity. Other techniques include event-related potentials (ERP), which can detect and average electrical impulses across larger areas of the brain and produce readings on the millisecond scale; and transcranial magnetic stimulation (TMS), which uses strong magnetic impulses to stimulate or inactivate small targeted areas of the brain for research or therapeutic purposes.

Already, fMRI experiments can tell researchers the intentions of a subject presented with the option to add or subtract two numbers. The ERP technique can reveal whether a subject has seen an object before, or—to a certain extent—whether a subject is lying. Scientists at the University of Pennsylvania have even derived an algorithm from fMRI observations of subjects that can tell the difference between a lying brain and a truth-telling brain. There is even evidence that fMRI can decode the brain’s visual cortex and tell us what a subject is looking at. The ethical questions that follow some of these technologies raise serious issues in the legal realm. Could an fMRI scan become admissible in court as evidence of malicious intentions? More ominously, does this so-called “brain-reading” portend a future where the inside of a person’s head is no longer sacrosanct? Could the government exploit this technology for national security purposes? If so, what does that mean for the freedom of human thought?

Although much of this research is benign, aimed simply at better understanding the brain or possibly helping to cure brain disorders, some of the more controversial forms of research—such as memory and lie-detection—have been funded by the Department of Defense through DARPA, and are intended for national security purposes. Since fMRI machines are large and expensive and the scans take time, DARPA has been developing more portable lie and memory-detectors by using wireless near-infrared technology to scan brains from a distance in airports or other secure areas.

Government and Social Coercion

As a result of DARPA’s Augmented Cognition (AugCog) project, U.S. soldiers may eventually carry equipment that integrates directly with their brains. Despite the program’s recent completion, its aims live on in other brain research initiatives. One project monitors soldiers’ brains as they rapidly scan intelligence photos: as many as 10 to 20 a second. Computers can then assess the fluctuations in their attention levels and determine whether the soldier’s brain finds the image relevant before the soldier is even consciously aware of the image. The computer can then store the relevant images for later viewing. Similar technology could also operate in a soldier’s binoculars and induce increased attention in the soldier when his visual system detects a relevant stimulus—again, all before he is even aware of the stimulus and consciously attends to it. Commanders can also assess the stress levels of soldiers in the field and shift tasks to other soldiers so that work is distributed for maximum efficiency. Electrocardiogram (ECG) and electroencephalogram (EEG) sensors in soldiers’ helmets could monitor vitals—or computer chips might even be implanted directly in a soldier’s brain.

Today pharmaceuticals like Adderall can enhance cognition, and others, like Prozac, can enhance mood. Drugs such as Provigil can even keep us awake for extended periods of time. Of course, their stated medical purpose is to help those with neuropsychological disorders—attention deficit disorder, depression, and narcolepsy, respectively. Nevertheless, ambitious college students, academics, and professionals are taking Adderall and Ritalin for enhancement purposes, either through off-label prescriptions or—as is the case with up to 25 percent of students at some colleges—without prescriptions. The journal Nature recently released the results of a survey of 1,400 of its readers. It found that one in five respondents had used drugs for non-medical reasons to enhance focus, concentration, or memory. Of those users, 62 percent had taken methylphenidate (Ritalin) and 44 percent had taken modafinil (Provigil).

One perspective is that these pharmaceutical forms of self-improvement should come out of the shadows so that individuals can exercise their freedom to alter their capabilities without stigma. The opposing viewpoint is that these drugs pose a significant problem: as society at large comes to use and accept brain enhancement, even the most enhancement-resistant citizens will feel pressure to give in or be left behind in the cognitive dust. And although many people may shudder at the notion of enhancement being required to keep a job or stay in school, it would be equally coercive to forbid enhancements for all just to protect those who might choose not to take them. Even assuming that brain enhancement is safe and effective, access to cognitive enhancement technologies will likely be distributed no better than current healthcare technologies—but again, this is not necessarily a reason to forbid brain enhancement for all. If brain-boosting drugs were to further contribute to social stratification, however, that might provide a rationale for government subsidies for those enhancements through existing healthcare frameworks.

Because schools and workplaces are obvious settings where required enhancement would raise serious ethical questions, it is likely that the first policies will be formed at the local level by school boards, employers, contractors, unions, and local governments. Will these policies require, forbid, or simply allow pharmaceutical enhancement in these contexts, or will it be some nuanced mixture? Connecticut already has a statute that prohibits “any school personnel from recommending the use of psychotropic drugs for any child.” The state also has another statute that prohibits children from being taken into state or court custody because their parents or guardians refuse to administer psychotropic drugs to the child.

Most importantly, policies must be sensitive to special populations within society that might be subjected to certain kinds of brain enhancement or brain manipulation coercively: these groups include military personnel, prisoners and detainees, and criminal suspects and witnesses. The military has an obvious interest in enhancing the capabilities of soldiers so they are able to fight for long periods of time without sleeping, and to remain alert and attentive. Pharmaceuticals might also make soldiers less sensitive to psychological trauma. This could prevent many soldiers from suffering post-traumatic stress disorder—an obvious benefit. But limiting the psychological impact of warfare might also make military personnel less risk-averse or less empathetic—effectively turning them into guiltless killing machines. Are these the kind of soldiers that America wants returning from the battlefield? Are these the kind of soldiers that America even wants on the battlefield in the first place?

Coercive brain alteration also raises ethical concerns for the treatment of convicted criminals. Some courts have taken it upon themselves to mete out so-called “therapeutic justice”: to sex offenders, by requiring them to take anti-androgens to reduce sex drive, and to violent criminals, by requiring them to take drugs known as selective serotonin reuptake inhibitors (SSRIs), like Prozac, to reduce their impulsiveness.

In the criminal justice system, issues of privacy and coercion merge when considering drugs like oxytocin, which can compel detainees, witnesses, or suspects to tell the truth or act friendlier to their interrogators. This is different from peering into a person’s brain, since the subject is aware that he or she is actively giving up the information—but is it still free will when that will is bent by a drug? Are there circumstances in which authorities should be allowed to slip coercive drugs to detainees without their knowledge?

Limitations of the Technology and Policies for Advancement

Current technology places significant limitations on these brain-imaging studies. For instance, fMRI only measures the increase in blood oxygen levels associated with brain activity, not the activity itself. Also, in order for experiments to provide relevant information, they must be carried out with highly specific designs: conditions must have discrete variables so that precise differences in each mental state are discernible. Complicating this is the fact that some regions of the brain have multiple functions.

The most relevant policy consideration regarding brain imaging is to fund the advancement of the neuroimaging apparatuses. In order to improve these imaging technologies and develop smaller, less expensive imaging machines, biological and physical scientists need to work together to determine how to unlock clear, real-time images from the brain’s electrochemical signals. This kind of research is difficult, given the current structure of scientific funding in this country, which assigns physical science research to the National Science Foundation and biological science research to the National Institutes of Health. Since neuroimaging lies at the intersection of both agencies’ jurisdictions, more joint-funding mechanisms—or even a restructuring of both agencies into one—would make sense.

Another consideration for science-funding policy is that private companies will conduct certain kinds of research that they will not publish or leave open for review. Advertising agencies have already entered the field of neuromarketing: fMRI research probing how the brain responds to advertisements. Private companies such as Cephos and the aptly named No Lie MRI have also begun to offer fMRI lie-detection services, though their clients have largely been married couples who suspect each other of cheating. Private companies may conduct less-than-thorough studies and make unsubstantiated claims about neurological research; appropriate oversight will be necessary to protect public interests and health.


Neuroscience research will continue to progress. Whether the research is done by the government or by private companies, in the U.S. or abroad, someone will fund it. Regardless of where brain-enhancing drugs are made, they will eventually make it into the hands of the people who are willing to pay for them. Funding, marketing, and regulation will shape the impact of neuroscience research on society. These forces will determine what kinds of research are carried out, the purpose of that research, and the subjects on whom it will be performed. They will determine which institutions or corporations will get access to the technology and innovations that result from that research. Finally, these forces will determine which citizens will have access to the products of neuroscience. Therefore, public policies must ensure rigorous research standards, protect privacy, prevent coercion, and aim for an equitable distribution of benefits.

Michael Rugnetta is a Fellows Assistant at the Center for American Progress.

