By Maurice Chammah and Dana Goldstein | The Marshall Project | January 29, 2015
Ever since the Supreme Court ruled that prisoners suffering from “mental retardation” — a now outdated term — could not face the death penalty in the 2002 case Atkins v. Virginia, debates about whether a felon qualifies for execution have often revolved around a single number: an IQ score. On Tuesday, Georgia prisoner Warren Hill was executed for the 1990 beating death of a fellow inmate. His attorneys argued unsuccessfully that his IQ of 70 disqualified him from the punishment. This evening, Texas is set to execute Robert Ladd for beating a woman to death with a hammer in 1996. His attorney has pointed out that Ladd’s IQ of 67 would disqualify him from execution in most other states.
Last May, the Supreme Court built on the Atkins decision by ruling that Florida could not exclusively use a simple IQ cut-off when it determined who was fit for execution. “An IQ score is an approximation, not a final and infallible assessment of intellectual functioning,” Justice Anthony Kennedy wrote, demanding a more holistic approach by medical professionals. “Intellectual disability is a condition, not a number.”
But how did IQ numbers become so central in death penalty cases in the first place? And why, even after the Supreme Court challenged their usefulness, are we still hearing about them?
The roots of these questions go back more than a century. IQ — intelligence quotient — dates to 1905, when the French psychologist Alfred Binet developed the first IQ test. Binet made clear that his test was not a measure of “innate” intelligence and should be used chiefly to identify children who needed help in school. Yet American eugenicists quickly popularized IQ as a tool for identifying people supposedly predisposed to crime, promiscuity, and low achievement in school and life. Courts and state agencies sometimes ordered the sterilization of people with low IQ scores. By 1925, IQ tests were in wide use in American public elementary schools to track students toward either the vocational or academic curriculum. Early IQ exams asked questions that required cultural knowledge, such as: “The forward pass is used in: tennis, handball, chess, football (circle one).” Unsurprisingly, IQ scores were correlated with race, class, and immigration status.
By the time Robert Ladd first took an IQ exam in 1970, at age 13, the testing was more sophisticated. As part of a psychiatrist’s evaluation, ordered after Ladd committed arson, Ladd sat for the Wechsler intelligence test, which required less factual knowledge and more performance, such as verbally repeating a series of letters or numbers back to a proctor. Yet the test also included questions on vocabulary and arithmetic. That meant Ladd’s score of 67 reflected not only his innate ability, but also his exposure — or lack thereof — to educational opportunities at home and in school.
Regardless of the still-raging debate over whether IQ measures nature or nurture, a large body of late-twentieth-century research seemed to suggest the scores were, in fact, related to criminality. In a widely cited 1977 paper, “Intelligence and Delinquency: A Revisionist Review,” Travis Hirschi and Michael Hindelang of the State University of New York at Albany cited a number of studies showing that IQ was a stronger predictor of juvenile delinquency than a family’s socioeconomic status. In 1985, criminologists James Q. Wilson and Richard J. Herrnstein published “Crime and Human Nature: The Definitive Study of the Causes of Crime.” Summarizing the IQ research of the period, they wrote that some criminals, such as forgers and embezzlers, tended to have higher IQs than the larger prison population, but murderers and rapists typically had low IQs. Herrnstein went on to co-author, with Charles Murray, “The Bell Curve,” the 1994 book that ignited a firestorm by resurrecting the old argument that some racial groups were less intelligent than others.
Meanwhile, more inquiries into the relationship between IQ and lawbreaking had appeared. A 1993 longitudinal study of 13-year-old boys showed that those with low IQ were more likely to commit crimes in the future, even when the researchers controlled for class, race, and motivation to do well on the exam.
These are the types of findings that predominated in the pre-Atkins era. Yet newer research complicates the notion of IQ as the most telling link between cognitive ability and crime. Measures of self-control, for example, now seem to be more reliable than either IQ or class status in predicting whether children go on to break the law as adults. The American Association on Intellectual and Developmental Disabilities emphasizes that IQ is just one measure of limited functioning and identifies other factors, including gullibility and the ability to follow directions, use money, or understand a schedule.
Nevertheless, when the Supreme Court ruled against the execution of the “mentally retarded” in 2002, many states, including Alabama, Florida, Kentucky, Virginia, and Idaho, started using IQ scores as a simple, unambiguous standard.
Even after the court told Florida last May it could not use a single numerical cut-off of 70 to prove disability, the confusion surrounding IQ and its application in death penalty cases has continued. Often IQ tests conducted by experts for the defense and prosecution produce conflicting numbers. IQ scores carry a standard error of roughly five points in either direction, and an individual’s score can change over time. (Freddie Lee Hall, the Florida prisoner whose case prompted that ruling, scored between 60 and 80 over multiple tests. Although Ladd took the Wechsler exam 45 years ago, his score of 67 figured prominently in his defense team’s strategy over the past year.) What is considered an average score shifts over time as well.
Additionally, although the court has ruled against a simple numerical cut-off, defense attorneys can still use these numbers to argue — particularly in the media — against the standards in states that don’t use IQ numbers. Even after the Hall v. Florida decision, it’s up to states to decide how to ascertain intellectual disability, and many of these standards are far less friendly to defendants than IQ tests; Georgia, which executed Hill this week, demands intellectual disability be proven “beyond a reasonable doubt.” Such a standard represents a desire for objective certainty in a realm — the human mind — that is endlessly complex and often opaque.
And prosecutors are able to exploit that uncertainty when defendants take IQ tests after a crime has been committed, since prosecutors can argue that defendants are purposefully not doing their best, explained capital defense attorney John Blume. “It’s pretty hard to overcome the possibility that the person might be malingering.”
Texas, where Ladd is set to be executed this evening, still relies on the 2004 court decision Ex Parte Briseno, which said judges could consider a variety of factors. These include whether a defendant is able to “hide facts or lie effectively,” respond to “external stimuli” with “rational and appropriate” conduct, and “show leadership.” The court left more specific requirements up to the state legislature and made a passing reference to Lennie Small, a character from John Steinbeck’s novella “Of Mice and Men,” as someone who might be disabled but might also still deserve execution. The author’s son, Thomas Steinbeck, recently said he wasn’t happy about this, and that “the character of Lennie was never intended to be used to diagnose a medical condition like intellectual disability.”