
The Idea of a ‘DNA Test’ for Transgender People Is Part of a Long, Dark History

Jeffrey Kluger is an editor at large at TIME. He covers space, climate, and science. He is the author of 12 books, including Apollo 13, which served as the basis for the 1995 film, and was nominated for an Emmy Award for TIME's series A Year in Space.

Science is too busy holding the universe together to care about politics. Greenhouse gases will continue contributing to climate change and vaccines will continue not causing autism, whether you believe it or not.

None of that means that politicians and ideologues won’t keep trying to drag science into their fights. The latest example of this unlovely truth came just this week, when the New York Times reported the existence of a memo drafted by the Department of Health and Human Services that would, if the changes it details were implemented, effectively roll back civil rights protections for transgender people by defining gender as a fixed biological trait, determined by genitalia at birth. The only exception, the memo reportedly states, would be if the male or female label could be “rebutted by reliable genetic evidence.”

But, as decades of scientific research have shown, that kind of test does not remotely exist. It likely never will.

It’s easy enough to determine basic physical sex with a quick scan for X and Y chromosomes. That scan can also detect some sex chromosome disorders, most commonly XXY, in which a person who is superficially male carries an extra X, or female, chromosome. The condition, known as Klinefelter syndrome, can result in a range of problems, including delayed or incomplete puberty, comparatively weak bones and undescended testicles. X and Y chromosomes, however, tell you absolutely nothing about gender identity, which is a vastly more complicated matter.

The genes that live on the chromosomes may play some role in determining gender identity—just as they do in determining height and weight and heart disease and athleticism—but it’s a limited one. For example, identical twins share effectively identical genomes, yet there are cases in which one twin is transgender and the other is not. (Identical twins do appear likelier to have matching gender identities, but the small sample of mismatched pairs makes it hard to establish conclusive answers.) In all transgender people, twins or not, womb environment, hormonal factors and differences in brain architecture may be involved as well.

The proposed HHS rule thus sets up a neatly circular trap—establishing a standard that can never be met and ending the conversation before it can begin. Don’t blame the policy, the reasoning goes; it’s just the science talking.

Listening when science talks can actually be a very good idea when the science is being used honestly. Meteorologists aren’t kidding when they tell you a hurricane is coming, and if you don’t evacuate, that’s on you. But across history, people with a political agenda to peddle have been exceedingly dishonest, putting forth rubbish dressed up as science as a way of backing up often-pernicious ideas.

The practice has long been a staple in justifying racial bias. For example, 19th-century American anatomist Samuel George Morton was the leader of a school of thought that tied intelligence to cranial size. Morton assembled what was then the largest collection of human skulls anywhere in the world, measured their volume and concluded that white Europeans came out on top, followed, in order, by Asians, Malaysians, Native Americans and, last, Africans.

The science was shabby for all manner of reasons, not least because it depended on the then-popular but unfounded five-race model for the human species. It also failed to reckon with the fact that across and within all racial groups, brain size varies widely, ranging from 1,053 to 1,499 cubic centimeters for men and 974 to 1,389 cubic centimeters for women. Then, of course, there’s the inconvenient truth that while some people may have bigger brains than others, the Neanderthals had us all beat, with brains that were up to 200 cubic centimeters bigger than ours. How’d that work out for them?

But Morton’s work had its political uses, since it fit neatly with theories of polygenism, the idea that the races emerged and evolved separately, and therefore could be unequally endowed. Charles Darwin pushed back in 1871, when The Descent of Man was published, opening the door to a scientific understanding of monogenism—the idea that we all descended from a common African ancestor. But in pre-Civil War America, Morton held sway, which came in awfully handy for people who were looking for a way to justify slavery.

IQ tests—especially the Stanford-Binet, which was first published in 1916 and has been regularly updated since—have been enlisted in service of the same intellectual myth. In fairness, IQ tests have their uses, since they do seem to provide a reasonably good measure of individual aptitude, one that remains more or less stable as people age. The problem, however, is when the tests cross cultures. Children in higher socioeconomic groups, who go to better schools and have more opportunities for enrichment like art or dance classes, perform better than less privileged kids. Those more-fortunate kids, however, are also likelier to be white, a fact that can easily be spun as a function not of income but of intellect.

No sooner was the Stanford-Binet developed than it was misused this way. In 1916, Stanford University psychologist Lewis Terman wrote that lower scores on IQ tests marked some groups as “feeble-minded…[a] level of intelligence that is very, very common among Spanish-Indian and Mexican families of the Southwest and also among negroes. Their dullness seems to be racial or at least inherent in the family stocks from which they came.”

Just eleven years later, the notion of feeble-mindedness, at least partly determined by IQ tests, was used as a justification for the Supreme Court’s notorious Buck v. Bell decision, which allowed forced sterilization for “insanity or imbecility,” mostly among the population of prisons or psychiatric hospitals. The Commonwealth of Virginia, wrote Justice Oliver Wendell Holmes, was justified in sterilizing “defective persons who, if now discharged, would become a menace, but, if incapable of procreating, might be discharged with safety and become self-supporting with benefit to themselves and to society…” Ultimately, an estimated 65,000 people were forcibly subjected to vasectomies or salpingectomies, the removal of the fallopian tubes. In 2002, the Governor of Virginia formally apologized to the victims of that program.

And it’s not just race. The oldest of the faux-science biases—one that emerged even when humans lived in largely racially homogenous societies—is, of course, the one that has been leveraged against women. As far back as 4,000 years ago, cultures had begun defining “hysteria”—which is derived from the Greek word hystera, for uterus—as a mental disorder, and a uniquely female one.

In a 2012 review of the science in the journal Clinical Practice and Epidemiology in Mental Health, a group of Italian researchers wrote about the mythical Greek soothsayer Melampus, who was said to have “placated the revolt of Argo’s virgins who refused to honor the phallus . . . their behavior being taken for madness. Melampus cured these women with hellebore and then urged them to join carnally with young and strong men. They were healed and recovered their wits.” The source of the madness, according to Melampus, was a “uterus being poisoned by venomous humors, due to a lack of orgasms and ‘uterine melancholy.'”

It was an appealing prescription if you were a male soothsayer: More sex, especially the kind that properly honors the phallus, is essential for a woman’s mental health. Hippocrates, who came along later, saw little to argue with here, though in his case it was the “movement” of the uterus, not the humors, that was the source of female madness.

Things got little better in the 20th century—indeed, in some ways they got worse, as the suffrage movement rose in the U.S. and Europe. This time it was menstruation that was the problem, an infirmity that would monthly render women insufficiently rational to be trusted with the vote. What’s more, while men were said to be robustly catabolic (readily expending energy), women were anabolic (conserving energy), which meant they would not have the vigor to participate in governance.

That kind of biological determinism was hardly left behind in the last century, as witnessed by the firestorm triggered last year when then-Google engineer James Damore circulated an internal memo arguing that the gender gap in Silicon Valley was at least partly a result of factors such as exposure to “prenatal testosterone,” which made men more suited to tech work than women. Science, no surprise, has found precisely nothing to back up any of this. What it has found instead is that when structural inequities in hiring, education and the workplace are removed, women—complete with uteruses and menstrual cycles—perform at least as well as men in government, business and all other relevant domains.

Ultimately Damore was fired, women got the vote, Darwin beat Morton and IQ tests began being analyzed more skeptically. The good news is, simplistic assumptions about transgender people will likely go the same way; the bad news is, as long as there is science—which means forever—there will be people willing to misuse what it teaches.


Write to Jeffrey Kluger at jeffrey.kluger@time.com

TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.