Sixty minutes goes by in the blink of an eye. It’s barely enough time to accomplish much of anything, really. But by the next tick of the long hand, two Americans will have lost their lives to acts of violence. In that same hour, 250 more will need medical treatment for a violence-related injury. As the hours pass, so mount the costs: on average $1.3 million for each violent fatality and $80,000 for each non-fatal assault. Each year, nearly 3% of our country’s gross domestic product is lost due to violence.
As these staggering numbers make clear, violent crime is one of the most pressing public health problems of our age. Scientists have a duty to address large-scale social problems like violent crime, and scientific research aimed at preventing antisocial behavior would seem likely to provide a particularly good return on taxpayer investment. But to what extent can science actually help? I believe there is a considerable disconnect between the aims of science and the goals of criminal law, and that should lead us to be cautious.
There is broad support in both the U.S. and Europe for applying scientific methods and data to crime prevention. One potentially promising and exceptionally controversial zone of engagement is “prediction.” The effort to predict “future dangerousness” is motivated by the belief that we can reduce antisocial behavior by identifying those people most likely to commit crimes. But clearly prediction is a double-edged sword: while we can use this information to more efficiently target costly social resources toward preventing violence in at-risk children, labeling any child a “future criminal” is likely to have serious adverse consequences all its own. Similarly, it’s reasonable to think that scientific data could be a critical tool for evaluating the likelihood that an adult criminal will commit violent crime in the future. But if science is being used to make decisions about whether, or how long, to deprive someone of their freedom, it is imperative that we have confidence in the validity and reliability of our predictive tools.
SCIENCE FACT AND SCIENCE FICTION
In the “old days,” our predictive tools were blunt: the clinician’s hunch, with its obvious limitations—lack of objectivity and reliability, to name two—served as a gold standard. Newer methods borrow from the language and statistics of actuaries, who compute insurance risks, overcoming objectivity and reliability problems with mathematical rigor. Actuarial prediction approaches are highly structured. They assess each individual according to the same set of specific variables—such as age at first offense, gender, or diagnosis of substance abuse—to assign that person to a high, medium, or low risk level. Though actuarial methods are unquestionably more reliable and valid than clinical assessment, judges and juries have been slow to warm to prediction by such “bean counting.”
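The structured-scoring idea can be made concrete with a minimal sketch. The items, weights, and cut-offs below are hypothetical, chosen only to illustrate the form of an actuarial instrument, not drawn from any validated tool:

```python
# Minimal sketch of an actuarial risk instrument.
# Every individual is scored on the same fixed variables;
# the items, weights, and cut-offs here are hypothetical.

def actuarial_risk(age_at_first_offense, prior_offenses, substance_abuse_dx):
    """Return a coarse risk band from a fixed, structured checklist."""
    score = 0
    if age_at_first_offense < 16:
        score += 2
    if prior_offenses >= 3:
        score += 2
    if substance_abuse_dx:
        score += 1
    # Map the total score onto a risk band.
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(actuarial_risk(15, 4, True))   # all three items endorsed -> "high"
print(actuarial_risk(30, 0, False))  # no items endorsed -> "low"
```

The point of the structure is that two evaluators scoring the same person must reach the same answer, which is exactly the reliability that the clinician’s hunch lacked.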
Enter the brain. Neuroscience has allowed us to peer into the black box of the human mind with a level of detail that would have been unthinkable twenty years ago. Advances in brain imaging and genomic science have begun to shed light on the biological origins of violence and antisocial behavior, spurring intense debate about their potential use as prediction tools. Some have embraced this potential with a particular eagerness, heralding the coming age of “neuroprediction.”
This enthusiasm is grounded in two assumptions. First is the belief that individual measures of biology have an intrinsic reliability and validity that non-biological tools lack. Second is that we can make determinations about specific individuals, which is the aim of criminal law, based on what we know of a general phenomenon from averaging scientific data across many, many individuals, which is the goal and method of science. Unfortunately, when it comes to something as complex and messy as human behavior, both of these assumptions can fail badly. Brain images and DNA sequences may some day prove useful for forecasting individual behavior. But for today, the tools of neuroscience are still far too crude and our understanding of the brain too imperfect to tout unabashedly the promise of neuroprediction. As the genetics example below illustrates, such a future has not yet arrived.
Imagine a line of hushed grade-schoolers snaking down the scrupulously white halls of an overly bright clinic. A nurse sweeps briskly from child to child, each offering a single index finger, upturned and extended. As she passes, a handheld device lightly grazes the succession of outstretched digits, drawing an aliquot of blood so small that it can barely be seen with the naked eye. The machine noiselessly sifts through the liquid to isolate each child’s genetic material; chromosome 5 is quickly scanned for a single letter at one specific position in the nucleotide sequence. The two options are “A” or “G.” The A’s will be free to leave. The G’s, marked with the genetic signature of violence, must stay behind for further evaluation.
The scene described above is clearly science fiction, but the notion that an individual’s DNA can be used to explain and predict their behavior is now taken quite seriously in many hallowed quarters. In several recent high-profile murder cases in the U.S. and Europe, courts have permitted defendants to be tested for the so-called “warrior gene” and allowed positive results to be submitted as a mitigating factor. (The gene is called monoamine oxidase A, or just “MAOA.”) When presented to sitting judges in mock trials, warrior gene evidence exerts a powerful effect on their punishment decisions, affirming the unconscious deference paid to biological explanations of human behavior—even when those explanations are wrong. You see, MAOA is not a warrior gene. In fact, there is not now, nor could there ever really be, any such thing as a warrior gene. Why not?
Our genome is a set of construction documents that dictates, among other things, how our brain cells are built, function, and get wired together. Everything that we are, we are because of our brains. Every consequential thought and every meaningless derailment, every blush of malice and every bite of conscience, every rush of joy and every slow bloom of sadness, every act of generation and every movement towards destruction, all of it, arises from coherent patterns of firing brain cells.
Across the entire population, one readily observes that there is enormous variability in human behavior of all kinds; this variability in behavior is driven by dramatic variability in the way that our brains work. In turn, individual differences in brains are determined, in large part, by individual differences in genes. The “other part” is, of course, environment, which also shapes behavior by shaping brain function. So genes cause differences in behavior by causing differences in the way that each of our brains work. But the path from gene to behavior through the brain is a tortuous one indeed.
MAOA gained notoriety as a warrior gene from the study of a Dutch family. The men in this family were very violent and antisocial, and it was found that they all carried a very rare mutation that “knocked out” their MAOA gene. This kind of circumstance is incredibly uncommon, though. Most of the time, genes vary between people in very small ways. We all have all of the same genes, but slight differences in the form that those genes take between people change the way that they work. This is why it’s misleading to talk about “having” or “not having” the “warrior gene.” Everyone has the MAOA gene, but it can come in at least two very slightly different versions, or “alleles.” Early studies found that people who had one version—the Low version found in about one third of the population—were statistically more likely to be aggressive compared to folks who carried the other version—the High version.
But since these original studies, our understanding of genetics has advanced considerably. We now know that each of these small allele differences has an absolutely tiny effect on behavior when considered individually. There are millions of people who have the “bad” Low version of MAOA but who are not violent, and thousands of people who do not have the Low version but are violent. It takes more than one “bad” allele to produce a violent person; it takes hundreds, or even thousands. We also now understand that genetic differences rarely affect human behavior with the kind of selectivity or specificity desired and required by the law. While MAOA is thought of as a warrior gene, or as a violence gene, people with the Low version have been found to have such “un-warrior-like” syndromes as depression, schizophrenia, and panic disorder. Together, these two points highlight the idea that any test for a single genetic marker will likely be meaningless for either explaining or predicting human behavior.
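The arithmetic behind that point is easy to check. The one-third carrier frequency comes from the studies discussed above; the overall violence base rate and the relative risk assigned to carriers below are hypothetical round numbers, but any plausible values yield the same qualitative conclusion:

```python
# Back-of-the-envelope: what fraction of Low-MAOA carriers are violent?
# The carrier frequency (~1/3) is from the studies discussed above;
# the base rate and relative risk are hypothetical round numbers.

carrier_freq = 1 / 3     # fraction of the population with the Low allele
base_rate = 0.01         # assumed overall rate of serious violence
relative_risk = 2.0      # assumed risk ratio, Low vs. High carriers

# Solve base_rate = carrier_freq * RR * p + (1 - carrier_freq) * p
# for p, the violence rate among High carriers.
p_high = base_rate / (carrier_freq * relative_risk + (1 - carrier_freq))
p_low = relative_risk * p_high

print(f"Violence rate among Low carriers:  {p_low:.1%}")
print(f"Low carriers who are NOT violent: {1 - p_low:.1%}")
```

Even granting the allele a doubled risk, roughly 98–99% of carriers would never become violent, which is why a single-marker test carries essentially no individual predictive information.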
SCIENCE AND THE LEGAL SYSTEM
There is a final specter that haunts the entire enterprise of neuroprediction: the group-to-individual, or G2I problem. This issue has its roots in a key difference between the aim and methods of science and the goals of the legal system. Science is focused on understanding universal phenomena; we do this by averaging data across groups of individuals. Law, on the other hand, only cares about specific individual people—the individual on trial. Neuroprediction is based largely on the assumption that you can individualize scientific data and inferences. If a study found that a certain allele in gene Y is statistically associated with violence risk, one might assume that finding out whether a person carries that allele would provide important information for determining whether he was likely to become violent. But this assumption is terminally flawed.
The same is true for brain imaging. If a study found that on average people with relatively lower fMRI signal in a specific brain region during a specific task were more likely to commit crimes—relative to people with a higher fMRI signal in that region—it does not follow that any one individual’s fMRI signal level will have any meaningful ability to predict crime. Because of these issues, I believe that it is extremely premature to talk about, much less submit as evidence, specific potential biomarkers for violence and antisocial behavior based on either brain imaging or genetic studies. At a bare minimum, we must carefully study the sensitivity and specificity of these potential biomarkers before we even consider permitting such evidence to influence judgments about individuals.
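The base-rate problem lurking behind sensitivity and specificity can be made concrete. Suppose, generously, a biomarker with 90% sensitivity and 90% specificity, applied to a population in which 1% of people will commit serious violence; all three numbers are hypothetical, but the conclusion is robust across any realistic values:

```python
# Positive predictive value of a hypothetical violence biomarker,
# via Bayes' rule. All three input numbers are assumptions.

sensitivity = 0.90   # assumed: P(marker positive | violent)
specificity = 0.90   # assumed: P(marker negative | not violent)
base_rate = 0.01     # assumed prevalence of serious violence

true_pos = sensitivity * base_rate            # violent and flagged
false_pos = (1 - specificity) * (1 - base_rate)  # non-violent but flagged
ppv = true_pos / (true_pos + false_pos)

print(f"P(violent | marker positive) = {ppv:.1%}")
```

Under these assumptions, more than 90% of the people flagged by even this unrealistically accurate test would never commit a violent act. That is the scale of error that would accompany admitting such a biomarker as evidence about an individual.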
Human behavior is exquisitely complex and often counterintuitive. It would be folly to think that serious examination of the causes of behavior and its pathological variants would yield simple explanations. Unfortunately, the sophistication of our technologies can too often incite overconfidence in the explanatory power of a given bit of neuroscientific data. This is not merely an academic quibble, a bit of ivory-tower contrarianism. Indeed, recent work has shown that fMRI images can possess a “seductive allure” for jurors. Mere exposure to colorful neuroimaging evidence can enhance the credibility of otherwise implausible explanations of individual behavior. When presented to judges in mock trials, fMRI and DNA evidence has a powerful effect on punishment decisions, affirming the authority commanded by biology in the courts. Overly simplistic explanations of human behavior based in neuroscientific data are far too easily taken at face value; when applied to individuals, such explanations are fatally flawed.
Science, the great and final arbiter of truth, can and should be used to promote justice. However, we are starting to ask questions of neuroscience data that these data cannot reasonably answer. As a result, lives and freedom may be decided on the basis of scientific evidence that, while cutting-edge, has as much power to explain and predict the actions of an individual as a deck of tarot cards.
Joshua W. Buckholtz is an Assistant Professor of Psychology at Harvard University, where he directs the Systems Neuroscience of Psychopathology laboratory (SNPlab).