My Body, My Laboratory

Eben Harrell

At the Radcliffe Infirmary in Oxford, England, in March 2002, doctors wheeled Kevin Warwick, a professor of cybernetics at the University of Reading, into an operating theater for what has to be one of the world’s few cases of elective neurosurgery on a healthy patient. Warwick belongs to a rare breed of scientists who experiment on themselves. He had volunteered to go under the knife so surgeons could hammer a silicon chip with 100 spiked electrodes directly into his nervous system via the median nerve fibers in his forearm. The goal was to fire electrical impulses into his brain to see whether a human could learn to sense, interpret and reply to computer-generated stimuli. The operation was dangerous. Success could lead to new avenues for prosthesis development, among other applications. Failure could mean nerve damage, infection, amputation or even brain injury. The lead surgeon paused before making the first incision into Warwick’s arm.

“He asked if I was ready,” remembers Warwick, now 56. “Of course I was. I had never been so excited. When they got in, the surgeons grabbed hold of my nerves, and it felt like my hand was being electrocuted. The pain was brilliant!”

The chip in Warwick’s arm did what it was intended to do, picking up neural action potentials — the signals sent from the cortex when a person thinks of moving a limb but does not actually do it. That allowed Warwick to use thoughts to control an electric wheelchair and, through an Internet connection, an artificial hand back in his lab in Reading. Six weeks after Warwick was wired up, his brain learned to interpret signals sent back from the chip too; when an improvised sonar device was connected to the implant, Warwick could sense how far away an object was from his arm even while he was blindfolded.

Warwick’s work may be cutting-edge, but his method is as old as science itself. In popular culture, self-experimenters are portrayed as mad scientists attempting to turn themselves into superhuman villains; in real life, their contribution to scientific progress is immense. Self-experimenters have won Nobel Prizes and helped control diseases.

For centuries, self-experimentation was an accepted form of science. Sir Isaac Newton almost ruined his eyesight because he could think of no other means of understanding visual hallucinations than staring at the sun. But in recent years, the academic institutions, grant agencies and journals that have codified the scientific method have come to view self-experimentation with suspicion, worrying that it leads to bias or misleading results. Nevertheless, the practice continues among a small number of professors and doctors who see it as the last chance to prove an underfunded theory, as an act of solidarity with other study subjects or simply as an avenue to fame.

Self-experimentation has also found new life on the Internet. So-called self-tracking has already made lay scientists of many of us as we buy the latest exercise device or nutritional supplement and then log into forums to compare our findings with those of other investigators. What the practice lacks in rigor, it makes up for in zeal, not to mention the sheer number of subjects running their own mini-studies. Somewhere in there, real — if ad hoc — science might occur. “To me, [self-tracking] is the future of self-experimentation,” says Seth Roberts, a professor of psychology at Tsinghua University in China, whose work led to the quirky best-selling diet book The Shangri-La Diet. The practice will continue among “normal people who are simply intent on discovering what works for them.”

A Rich Tradition

Warwick is a good example of why people choose to experiment on themselves. His first motivation was, he admits, selfish: “Pure scientific adrenalism,” he says. “The desire to follow my heroes.” At the same time, he understood the risks involved and felt that “if we were going to fry someone’s nervous system, I’d rather it be my own.”

Those two seemingly opposing motivations — self-promotion and altruism — have long driven the practice. U.S. Army physician Walter Reed and his three junior doctors, James Carroll, Aristides Agramonte and Jesse Lazear, are probably the most mythologized self-experimenters in U.S. history for their efforts to uncover the cause of the yellow-fever outbreak that ravaged troops during the Spanish-American War. Building on the work of another physician, Stubbins Ffirth, who drank the blackened vomit of yellow-fever victims to prove that the disease was not contagious, Reed’s doctors set out to prove that an insect bite, rather than person-to-person transmission, was how the virus spread. On a field trip to Cuba, the trio allowed mosquitoes to feast on their bodies. Carroll later wrote that the experiments went ahead because each of the doctors was “willing to take a soldier’s chance.” Lazear died during the experiment, and Carroll suffered long-term complications that led to his death at 53.

To some supporters of experimenting on oneself, such selflessness should underpin all scientific inquiry. Lawrence Altman, a physician and medical journalist, argues in his book Who Goes First? that researchers should take part in every medical trial — even if it is a large study — to avoid charges of elitism. “No man’s life is worth more than another’s,” he writes. Other proponents go further, arguing that trying things out by oneself cuts costs and speeds development. “If you succeed with yourself, then you can go on to larger trials,” says Allen Neuringer, a psychology professor at Reed College in Oregon.

That’s exactly what happened for David Pritchard, an immunologist at the University of Nottingham in Britain who was studying immune disorders such as multiple sclerosis and severe allergies. During his work as a young man in Papua New Guinea, Pritchard was struck by the near total absence of such conditions in the developing world. That led him to become a proponent of the hygiene hypothesis, the idea that by successfully scrubbing out the bacteria and parasites that human bodies have evolved to battle, people in rich nations have inadvertently thrown their immune systems out of balance. That can sometimes cause the immune system to attack the body. Studies in mice suggested that infestation with hookworms can suppress autoimmune disorders. By 2004, Pritchard hoped to test whether something similar happened in humans, and there was only one way to start: by allowing a batch of worms to nest in his intestines.

The first step was to determine what would be a safe dose of hookworms for an adult. In small numbers, the parasite does not cause symptoms, but severe infestations kill 65,000 people a year and sicken hundreds of thousands more. Pritchard and his assistants randomly assigned dosage levels among the team, and each member self-infected with the dose he or she drew. They learned that 10 hookworms was the upper limit for safety. Pritchard, unfortunately, drew a dose of 50 and spent five days in agony before receiving medical treatment. Nonetheless, the study helped secure approval and funding for a larger, ongoing trial in patients with multiple sclerosis.

Pritchard’s work had the support of his university, but many scientists turn to self-experimentation out of frustration. Barry Marshall, an Australian gastroenterologist, became so exasperated by his inability to convince the medical community that the common bacterium Helicobacter pylori, rather than stress, causes stomach ulcers that in 1984 he swallowed a beaker of the stuff. He developed severe gastritis a few days later, but his theory gained acceptance. In 2005 he was awarded a Nobel Prize for his work.

For every scientist thinking of following Marshall’s lead, it’s worth remembering that not all roads lead to Stockholm. In 2009, Yolanda Cox, a 22-year-old pharmaceutical researcher, died after she and her sister, a physician, injected her with an experimental drug in search of a way to slow the aging process.

Today’s lengthy ethical approval protocol is designed to prevent such disasters, but Warwick admits that no process is fail-safe. “So many people raise so many concerns that you have to put the blinkers on at a certain point and just go for it,” he says. One of Warwick’s students, Ian Harrison, did just that. He had small magnets implanted in his fingertips in 2009. A sonar device similar to the one Warwick used was then attached to an electromagnetic coil that made the magnets vibrate depending on an object’s distance from Harrison’s hand — an experiment with obvious implications for assisting the blind. To implant the magnets, Harrison hired a body-modification artist in Britain who specializes in decorative scarification. Harrison has grown attached to the magnets and has yet to take them out, a delay that almost certainly would not have been allowed if a paid member of the public had been used for the experiment. “My friends think it’s really cool,” he says.

Going Viral

Cool is as good a description as any for the Quantified Self phenomenon, a grass-roots movement brought together by the Internet. The guru of the field is Roberts of Tsinghua University. As a graduate student in the 1970s, he decided that the best way to improve as an experimentalist was to run multiple simultaneous trials on himself. In the past 30 years, he has tracked his sleeping patterns, his response to acne remedies, the effect of his diet on his mental arithmetic and much more.

Roberts argues that tracking allows him to tinker with dozens of studies in a year or two, something that can yield real data — even if it’s at the expense of glory. “Some self-experimenters are spared the stigma of their research being cheap and straightforward because it is noble,” he says. “But my work wasn’t noble at all.”

Still, don’t underestimate self-interest. Roberts points to Richard Bernstein, an engineer with diabetes who in 1969 began using an early blood-glucose meter to monitor himself and discovered that many small, self-regulated doses of insulin spread over the day maintained better blood-sugar levels than one large daily dose. As more people begin to document their self-help projects, their combined efforts could yield other such impressive breakthroughs.

Denis Harscoat, co-organizer of the Quantified Self group in London, agrees. Workers are more productive if they complete regular, small tasks rather than an occasional large project; the same is true of do-it-yourself science, he says. At the meetings Harscoat convenes, members discuss everything from monitoring their blood pressure to which behaviors best facilitate writing a play. “You might think we are a bunch of data-crunching geeks,” he says, “but it’s good to track.”

And track the Quantified Selfers do, often aided by new products designed for them: Zeo headbands, said to monitor sleep phases; Nike+, a shoe-mounted sensor that records distance, speed and time; Asthmapolis, an inhaler attachment that logs the location, time and date of each puff so asthmatics can monitor their attacks. Every bit of data is shared in meetings so it can be considered in the aggregate.

At some point, to be sure, quantifying leads to overload — to paralysis by analysis. Harscoat says meetings can turn into confessionals for those who have lost touch with reality. “We tell people not to track more than two or three things,” he says.

That may help, but self-experimentation undoubtedly attracts oddballs and obsessives. Warwick, for example, says his next planned experiment may involve implanting electrodes deep into his brain. That procedure scares even him: he plans to wait until he’s 60 because he isn’t ready to say goodbye to his wife and family. Still, he says, the experiment “will be fascinating, whatever happens.”

That is what draws those who experiment on themselves to the edge — a restless curiosity coupled with the possibility of doing real good. We should be grateful that there have been such folks in the past and hopeful that there will be some in the future. Just not too many, and not all at once, please.

This article originally appeared in the Feb. 28, 2011 issue of TIME.
