
Socially Valued, Not Inherently Valuable

An extract from The Genetic Lottery: Why DNA Matters for Social Equality by Kathryn Paige Harden, with kind permission from the publisher Princeton University Press.

24 November 2021

The tendency to see intelligence (as measured on standardized IQ tests) and educational success, perhaps more than any other human phenotypes, in terms of a hierarchy of inferior and superior persons is not an accident. It is an idea that was deliberately crafted and disseminated. As the historian Daniel Kevles summarized, “Eugenicists [in the early twentieth century] identified human worth with the qualities they presumed themselves to possess—the sort that facilitated passage through schools, universities, and professional training.”5 And this equation is most clearly on display in the history of intelligence testing. 

The first intelligence tests were created by a pair of psychologists, Alfred Binet and Theodore Simon, who had been tasked by the French government with developing a means to identify children who were struggling in school and needed additional assistance. The resulting Binet-Simon scale asked children to do a series of practical and academic tasks that were typical of everyday life. An eight-year-old child was asked, for instance, to count money, name four colors, count backward, and write down dictated text. 

The key advances of the Binet-Simon scale lay not in the specific tasks it asked of children, but in two innovations. First, the same tasks were asked of everyone (standardization). Second, the same tasks were administered to a large number of children, permitting statements about how the average child of a certain age performed and about how any one child’s performance compared to that age-graded average (norming).

Any parent who has ever consulted a growth chart to see whether their child is gaining enough weight, or who has ever asked a teacher whether their child’s reading is keeping up with the rest of the class, will immediately grasp the power of norming. You can look around at your friends’ children, or try to recall what your older children were like at that age, but you don’t really know—what is the typical weight for an 18-month-old? How many words can the average 6-year-old sight-read? A properly constructed set of norms won’t tell you why a child isn’t gaining weight or is struggling to read. Norms for one set of tasks won’t tell you whether there are other socially valued skills that aren’t being measured. But norms will give you some comparative data that is grounded in something other than people’s subjective intuitions about what children can and can’t do. 
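To make the logic of norming concrete, here is a minimal computational sketch. The numbers below are invented for illustration and are not drawn from the Binet-Simon scale or any real norming sample; the point is only the procedure: collect the same scores from many children of the same age, then express any one child’s raw score relative to that age-graded reference group.

```python
# Illustrative sketch of norming with made-up data (not real test norms).
# Standardization: every child completes the same set of tasks.
# Norming: a child's raw score is interpreted relative to an age-graded
# reference group, rather than against anyone's subjective intuition.

from statistics import mean, stdev

# Hypothetical raw scores (tasks completed) from a sample of 8-year-olds.
reference_scores_age_8 = [12, 15, 14, 13, 16, 11, 14, 15, 13, 12, 14, 16]

def normed_score(raw_score, reference_scores):
    """Return a z-score (distance from the reference-group mean in
    standard-deviation units) and the share of the reference group
    scoring at or below the raw score."""
    mu = mean(reference_scores)
    sigma = stdev(reference_scores)
    z = (raw_score - mu) / sigma
    at_or_below = sum(s <= raw_score for s in reference_scores) / len(reference_scores)
    return z, at_or_below

z, pct = normed_score(10, reference_scores_age_8)
print(f"z-score: {z:.2f}; at or below {pct:.0%} of the reference group")
```

As the paragraph above notes, a normed score of this kind only describes relative standing on one set of tasks; it says nothing about why a child scores where they do, or about skills the tasks never measured.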

Tragically, the Binet-Simon scale was nearly immediately appropriated as a quantitative metric that justified the inegalitarianism that already characterized American society. Psychologists discovered some things about measurement: if you ask children to perform a finite number of tasks, older children can do more things than younger children; children differ in the rate at which their performance on those tasks improves; and differences in performance on a small number of tasks can be informative about which children will struggle with a much broader set of learning tasks they face in their lives. And then psychologists invented another idea: that performance on those tasks could be used to tell you which people were better than other people. 

In 1908, the American psychologist Henry Goddard imported the Binet-Simon tests from France to the United States, translating them into English and using them to test thousands of children. Goddard published the results in a 1914 book, Feeble-Mindedness: Its Causes and Consequences.6 In it, Goddard alleged that the so-called “feebleminded” were physically distinct: “There is an incoordination of their movements and a certain coarseness of features which do not make them attractive, but in many ways suggest the savage.”

More damningly, people with low scores on the early intelligence tests were alleged to be deficient morally. According to Goddard, they lacked “one or the other of the factors essential to a moral life—an understanding of right and wrong, and the power of control.” At the same time, “the folly, the crudity” of immoral behavior, including all forms of “intemperance and . . . social evil,” were considered “indication[s] of an intellectual trait.” Combining intellectual, physical, and moral deficits, the overall picture that Goddard painted of “feeblemindedness” was appalling in its dehumanization: the “feebleminded” man or woman was “a more primitive form of humanity,” a “crude, coarse form of the human organism,” “a vigorous animal.” 

In this way, Goddard and his contemporaries positioned intelligence test scores as a numerical referendum on one’s human value. People with low scores were “primitive” humans, animal-like in their physical savagery and lack of moral responsibility. As the historian Nathaniel Comfort summarized, “IQ became a measure not of what you do, but of who you are—a score for one’s inherent worth as a person.”7 It was this concept—not “How many questions do you get right on a standardized intelligence test?” but “How primitive is your humanity?”—that was then attached to ideas about heredity and genetic difference.

As a clinical psychologist who has overseen the administration of literally thousands of IQ tests, I found reading Goddard’s book a deeply uncomfortable experience. Goddard was one of the founders of American psychology, a group that transformed the field into an experimental science rather than a subfield of philosophy. He helped to draft the first law mandating the availability of special education services in public schools. Atkins v. Virginia, the 2002 Supreme Court decision that found that people with intellectual disabilities should not be subject to the death penalty, would have been cheered by Goddard, who was the first person to give legal testimony that people with low intelligence had reduced criminal culpability. Anyone working as a forensic or clinical or school psychologist today is working in a field that Goddard helped to create (just as anyone doing any statistical analysis is inescapably indebted to Galton, Pearson, and Fisher). Yet Goddard worked deliberately to establish what I consider an abhorrent idea—that intelligence test scores are a measure of someone’s worth. 

Fast-forward a century, and the idea that intelligence test scores could be used as a referendum on someone’s very humanity continues to haunt any conversation about them. In 2014, for instance, the writer Ta-Nehisi Coates, angry that people were “debating” the existence of genetically caused racial differences, made it clear that he considered questions about one’s intelligence to be inseparable from questions about one’s humanity: “Life is short. And there are more pressing—and actually interesting—questions than ‘Are you less human than me?’” Other writers responded with apparent bewilderment at Coates’s statement (e.g., “It genuinely grieves me,” wrote Andrew Sullivan).8 But such bewilderment betrays willful ignorance about the history of intelligence testing. Coates’s rhetorical question—“Are you less human than me?”—was the exact question that the early proponents of intelligence testing were asking in earnest.

No discussion of intelligence and educational success can ignore this history. In fact, given this history, multiple scholars have advocated abandoning standardized testing and the concept of “intelligence” entirely. In this view, there is no legitimate way to study intelligence, even within a racial group, because the concept of intelligence is itself an inherently racist and eugenic idea. The historian Ibram X. Kendi, in his book How to Be an Antiracist, gave a trenchant expression of this concern: “The use of standardized tests to measure aptitude and intelligence is one of the most effective racist policies ever devised to degrade Black minds and legally exclude Black bodies.”9

Thus, even if molecular genetic studies of intelligence and educational attainment are focusing their attention exclusively on understanding differences between individuals within European ancestry populations, some consider the work to still be the fruit of the poisoned tree. 

But other writers paying attention to race and racism have concluded that, despite the intentions behind their creation, IQ tests are nonetheless valuable tools for understanding the effects of discriminatory policies. As Kendi himself describes, identifying racial inequity is critical to fighting what he calls “metastatic racism”:

If we cannot identify racial inequity, then we will not be able to identify racist policies. If we cannot identify racist policies, then we cannot challenge racist policies. If we cannot challenge racist policies, then racist power’s final solution will be achieved: a world of inequity none of us can see, let alone resist.

The importance of documenting racial inequities in health outcomes like life span, obesity, and maternal mortality is obvious: How are we to close these disparities, to investigate how policies affect them, if we cannot measure them? For instance, knowing that desegregating Southern hospitals closed the Black-White gap in infant mortality and saved the lives of thousands of Black infants in the decade from 1965 to 197510 requires, at a minimum, being able to quantify infant mortality. 

Documenting racial inequities in health means documenting racial inequities in every bodily system—including the brain. And some racist policies harm the health of children by depriving them of the social and physical environmental inputs necessary for optimal brain development, or by exposing them to neurobiological toxins. 

Consider lead. In 2014, when the city of Flint, Michigan, switched the source of its drinking water supply from Lake Huron to the Flint River, Flint residents—the majority of whom are Black—immediately complained about the switch: an early story by CBS News was titled, “I don’t even let my dogs drink this water.”11 The new water supply was corrosive. As it flowed through the antiquated lead pipes of the city’s water system, lead leached into the drinking water. In areas of the city with particularly high lead levels, the percentage of children with elevated blood lead levels nearly tripled, to over 10 percent.12 Those areas with the highest exposure to lead were also the areas with the highest concentration of Black children. The confluence of factors visiting harm on these children led the Michigan Civil Rights Commission to conclude that the lead poisoning crisis was rooted in “systemic racism.”13

What tool is used to measure the neurotoxic effects of lead? IQ tests. Documenting the IQ deficits that result from lead exposure prevents researchers and policymakers from shrugging off the effects of lead as temporary or trivial. And that is just one example. In her book, A Terrible Thing to Waste, Harriet Washington documents how people of color are overwhelmingly more likely to be exposed to environmental hazards like toxic waste and air pollution. Moreover, she argues that IQ tests, by providing a numerical metric for a child’s ability to reason abstractly, are currently an irreplaceable tool for quantifying the perniciousness of what she terms “environmental racism”:14 “In today’s technological society, the species of intelligence measured by IQ [tests] is what’s deemed most germane to success. . . . IQ is too important to ignore or wish away.”15

Washington is right: the skills measured by IQ tests, while certainly only representing a fraction of possible human skills and talents, cannot be wished away as unimportant. In Western high-income countries like the United States and the UK, scores on standardized cognitive tests (including scores on the classic IQ tests, and also scores on tests used for educational selection, like the SAT or ACT, which are highly correlated with IQ test scores16) statistically predict things that we care about—including life itself. Children who scored higher on an IQ test at age 11 are more likely to be alive at age 76—and, no, that relationship cannot be explained by the social class of the child’s family.17 Students with higher SAT scores, which are correlated as highly as 0.8 with IQ,18 earn higher grades in college (especially after one corrects for the fact that good students select more-difficult majors).19 Precocious students with exceptionally high SAT scores at a young age are also more likely to earn a doctorate in a STEM field, to hold a patent, to earn tenure at a top-50 US university, and to earn a high income.20

Washington’s quest to reclaim intelligence tests as a tool to combat environmental racism mirrors the efforts of other scholars of color and feminist scholars who have argued that quantitative research tools can be used to challenge multiple forms of injustice. The feminist Ann Oakley, for example, argued that “the feminist case” for abandoning quantitative methods was “ultimately unhelpful to the goal of an emancipatory social science.”21 Similarly, Kevin Cokley and Germine Awad, my colleagues at the University of Texas, affirmed that “some of the ugliest moments in the history of psychology were the result of researchers using quantitative measures to legitimize and codify the prejudices of the day.”22 They went on to argue, however, that “quantitative methods are not inherently oppressive,” and can, in fact, “be liberating if used by multiculturally competent researchers and scholar-activists committed to social justice.”

Intelligence tests were positioned by eugenicists as a measure of someone’s inherent worth, with the resulting hierarchy of inferior and superior humanity conveniently ratifying the ugliest suppositions of a racist and classist society. Intelligence tests measure individual differences in cognitive functions that are broadly relevant, in our current societies, to people’s performance at school and on the job, even to how long they live. The challenge is to reject the former without denying the latter. Like a measure of a child’s speech impairments, intelligence tests don’t tell you that a person is valuable, but they do tell you whether a person can do (some) things that are valued.

THE GENETIC LOTTERY: Why DNA Matters for Social Equality by Kathryn Paige Harden. Copyright © 2021 by Kathryn Paige Harden. Reprinted by permission of Princeton University Press.