By the time Cherlynn Mathias was ready to blow the whistle on Dr. Michael McGee two years ago, it had been clear for quite a while that something fishy was going on. For one thing, there were the hokey infomercials touting his experimental vaccine for malignant melanoma, a particularly nasty form of cancer, as if it were a Veg-O-Matic. “Thanks to the vaccine,” a patient declared onscreen, “my cancer is in total remission.” Then there was the sales pitch McGee delivered in person. When she and the doctor met with a prospective patient, says Mathias, who worked as his research nurse, he would come on like a used-car salesman: “‘We have the best vaccine out there,’” she remembers his saying. “‘Two-thirds of my patients have responded to the treatment.’” He was even giving the drug to his father-in-law, he would tell people; that’s how good it was.
But what they got, once they joined McGee’s clinical trial, many patients say, was a different story. According to Mathias, more than a third developed severe side effects, including uncontrollable nausea, fevers, rashes, swelling and terrible headaches. Some thought the doctor’s behavior was odd. While on the vaccine, one patient, Dawanna Robertson, discovered she was pregnant; she panicked when she recalled the warning on the consent form she had signed: “The potential effects of these drugs on the growing fetus…may include serious birth defects.” Yet when she voiced her fears, Robertson says, McGee assured her that the vaccine couldn’t pass through the placenta. She received another injection that day.
Still, Mathias says, most of McGee’s patients believed that this vaccine was their best chance for recovery from what is usually a fatal disease. That’s why they were so shocked when McGee sent them a letter that read, in part, “Patients have enrolled in this study more rapidly than originally expected…Due to this interest, the sponsor has exceeded its capacity to supply the experimental Melanoma Vaccine and is unable to provide material for further injections at this time.”
The letter was devastating enough, but Mathias knew it was a lie. The truth was that the trial had been suspended out of a growing concern among McGee’s supervisors that it might have been doing more harm than good. So after agonizing for days about what to do, she wrote a long, detailed letter to what is now called the federal Office for Human Research Protections (OHRP), describing McGee’s multiple lapses. Her letter reported that McGee had, among other things, stored the vaccine improperly, exposing it to potential contamination; failed to maintain adequate records and to track the vaccine’s consistency from batch to batch; mislabeled vials of the stuff; and, worst of all, kept most of the data on adverse side effects secret. An investigation by the Food and Drug Administration confirmed her suspicions and ultimately revealed more than 20 separate deficiencies. “We were doing nothing right,” says Mathias. “It was a perfect lesson in how not to run a clinical trial.”
The lesson would have been a lot less shocking if McGee’s vaccine trial had been run out of a back-alley clinic or a storefront in Tijuana. In fact, the study was conducted at the St. John Medical Center in Tulsa, Okla., and co-sponsored by the respected University of Oklahoma Health Sciences Center. It had been approved by the university’s institutional review board, or IRB, a body set up to ensure that such trials meet federal standards for experimental design–including the obligation to inform participants of any safety issues. And it had been given the green light by the FDA itself.
Unusual as it is, the Oklahoma case isn’t an isolated incident–and in many ways, it isn’t even the worst. Clinical trials are usually pretty safe; the vast majority of subjects are not hurt in any way. But so many problems–and such serious problems–have surfaced in recent years that doctors and hospital administrators are starting to wonder whether there is something dangerously wrong with the clinical-trial system.
Nobody is suggesting that it be shut down completely. Clinical trials are a vital and necessary part of America’s vaunted medical research system. They are its primary mechanism for testing potential drugs and separating the ones that work from the ones that are useless or actively harmful. Yet the very nature of human testing involves risk; nobody can tell in advance whether a new medicine carries unforeseen dangers. And so clinicians are forced to walk an ethical and scientific tightrope. Make the rules protecting patients too lax, and subjects will suffer and even die needlessly. Make them too strict, and lifesaving medications won’t make it out of the lab quickly enough to help the people who need them most.
But precisely where the balance lies is a matter of serious, even bitter debate. At one extreme are those who believe that most trials are tainted because they play on the fears of desperately ill patients, involve some sort of subtle coercion like money or free medicine, or fail to warn patients of the very real dangers they face. Some critics argue as well that there are simply too many trials, as pharmaceutical companies looking for a share of the blockbuster drug market pump out copycat medicines that no one really needs.
On the other side are clinicians who feel they are already burdened with too much regulatory paperwork. Tighter rules will just take time and energy away from what they should be doing–developing and testing desperately needed medications. A few mishaps today, they say, may be the price we pay to save thousands of lives tomorrow.
Still, something is clearly wrong with the system as it now operates. Over the past three years, more than 60 institutions, including several of the world’s most prestigious research centers, have been criticized by the U.S. government for failing to protect human subjects adequately. McGee’s patients were very sick, so in a sense they couldn’t be made much worse by his treatments. But federal records show that since 1999 at least four people who entered clinical trials in reasonably good health wound up dead–including two infamous cases, at Johns Hopkins Medical Center and the University of Pennsylvania.
The actual number of such deaths may be considerably higher, but nobody really knows. The monitoring of clinical research in the U.S. is so piecemeal, and the reporting of problems so haphazard, that it’s almost impossible to find out what is really happening. Thanks to a patchwork regulatory system, perhaps a quarter of all clinical research–including some studies on reconstructive surgery, dietary supplements, stem cells and infertility treatments, for example–gets no federal oversight whatsoever. And even where oversight is mandated, it’s often applied loosely, if at all.
And it can only get worse as the number of trials increases. According to CenterWatch, a patient information group that monitors clinical research, 80,000 clinical trials were conducted in the U.S. last year alone. Adil Shamoo, a bioethicist from the University of Maryland School of Medicine who sits on the National Human Research Protection Advisory Committee, estimates that some 20 million people were enrolled as research subjects last year–three times the number a decade ago.
That figure is expected to grow astronomically in the next few years, as drug companies prepare record quantities of new medicines for market and as the budget of the National Institutes of Health–the government’s primary research agency–continues to grow. Says Thomas Murray, president of the Hastings Center, a bioethics think tank: “Our human-subject-protection apparatus is simply not equipped to deal with this demand. We’re going to potentially face 100 Oklahomas, 100 Hopkinses.”
But what to do about it isn’t at all clear. Some experts favor tighter enforcement of existing rules and greater resources for the understaffed, overworked review boards that too often let shoddy research proceed. Others think patients need to be told more clearly and forcefully what the dangers and limitations of clinical trials really are. Still others are convinced that financial conflicts of interest–drug companies sponsoring trials and paying doctors–are the root of all evil. Bills are being introduced in both houses of Congress in the next few weeks that are designed to better protect research subjects, and OHRP, the main research regulatory agency, is rewriting its rules. What’s clear to nearly everyone, though, is that without uniform, federally mandated regulations, the situation will only get worse.
It’s hard to believe, but as recently as 1974 individual scientists and their financial backers could decide for themselves what constituted ethical research. Most of the time their judgment was sound, but there were plenty of appalling exceptions. In the 1950s Army doctors gave LSD to soldiers without telling them what it was. In 1963 researchers injected prisoners and terminally ill patients with live cancer cells to test their immune responses; they were told only that it was a “skin test.” In the 1950s mentally retarded children at Willowbrook, a state institution in New York, were deliberately infected with hepatitis so that scientists could work on an experimental vaccine. And in perhaps the most infamous case on record, doctors at Alabama’s Tuskegee Institute, starting in the 1930s, deliberately withheld treatment from syphilis-infected African-American men for 40 years to monitor the course of the disease.
The revelation of these and other scandals led to the National Research Act of 1974, which required institutional review boards to approve and monitor all federally funded research. The Department of Health and Human Services followed up by creating what is now called the Office for Human Research Protections, whose job was supposed to be to oversee the IRBs. But the nature of medical research has changed dramatically in the past few decades. “Back then, research tended to be a single investigator working at an academic institution conducting a small-scale clinical trial,” says Dr. Jeremy Sugarman, director of the Center for the Study of Medical Ethics and Humanities at Duke University School of Medicine. “As medicine changed, however, the review system did not.”
Until last year, in fact, when the agency’s budget tripled, OHRP had just two full-time investigators to monitor more than 4,000 federally funded research institutions. Since 1980, the agency has audited, on average, just four sites a year. The FDA is somewhat more vigilant, making site visits to about 200 of the approximately 1,900 IRBs that oversee research on FDA-regulated products.
Meanwhile, IRBs, which are supposed to be the first line of defense against unethical or badly designed studies, are often overwhelmed by the job. At some large research universities, a single IRB must supervise more than 1,000 clinical trials at once. Indeed, a 1996 report by the General Accounting Office found that some IRBs spend only one to two minutes of review per study. Board members can’t possibly be experts in every field; most are in-house researchers whose own studies are likely to come up for review someday. Says George Annas, a Boston University health-law professor and a critic of current U.S. laws: “Researchers tend to approve research; they know this is how the institution makes its money. They rarely deny anything.”
The financial conflicts of interest can extend not only to the institutions but also to the researchers themselves. One of the reasons Jesse Gelsinger’s death in the University of Pennsylvania’s gene-therapy trial in 1999 seemed especially scandalous was that James Wilson, the principal investigator in the study, held a 30% equity stake in Genovo, which owned the rights to license the drug Wilson was studying; the university owned 3.2% of the company. When Targeted Genetics Corp. acquired Genovo, Wilson reportedly earned $13.5 million and Penn $1.4 million.
This doesn’t mean that scientists are pushing bad drugs just to make money. Their interest is in research, and they often need the financial backing of corporate patrons just to get started. After Wilson’s financial interests in the Gelsinger case came to light, he insisted that they played no role whatsoever in his decisions, that research was his driving motivation. Yet Marcia Angell, former editor of the New England Journal of Medicine, argues that such a link tends to bias the investigator, even if the bias is unconscious. A recent study by the University of Toronto analyzed 70 studies of a controversial heart drug. The results were telling: 96% of the researchers who were supportive of the drug had ties to companies that manufactured it, and only 37% of those critical of the drug had such ties. As more and more scientists either own stock in or get funding from for-profit companies, the ones who have no industry connections are increasingly rare.
In the Oklahoma case, it may have been a nobler motive that tripped up McGee. “McGee is a good surgeon and a decent man,” maintains his whistle-blowing research nurse, Mathias. “But he became a biased investigator. He thought he had found the cure for cancer. He really wasn’t interested in running a clinical trial; he wanted to administer his drug”–even if that meant breaking the rules to get there. Although McGee said he had tested his drug properly on animals, he had not; the data he submitted in his original protocol came from animal studies of a different drug. McGee promised the FDA that he would test each lot of the new vaccine on animals for safety before injecting it into humans. The FDA’s subsequent investigation found that such testing had not been done adequately.
Then the university review board dropped the ball. Despite federal rules requiring it to conduct “continuing review” of ongoing studies, the board met just once a month, typically for an hour, and then went out to dinner. Among other things, the board approved McGee’s consent form, which contained numerous errors. McGee later got permission to add more subjects than the original 25 he had applied for. According to the OHRP investigation, 11 of McGee’s first 18 subjects didn’t meet eligibility criteria. Like most of the key oversight decisions, this one actually came directly from the IRB chair, Daniel Plunket, who often did a one-man “expedited review” without consulting the rest of the board. James Robinson, Plunket’s lawyer, insists “there is no evidence” his client “took any action on his own.”
Mathias reported substantial protocol violations to Plunket and told Dr. Harold Brooks, dean of the university’s College of Medicine in Tulsa, as well. They finally agreed to hire an outside consulting firm to audit the experiment. The finding: deficiencies “so severe that it is beyond the scope of this report to advise corrective actions.” This finally persuaded Brooks to put the trial on hold. But according to the investigation, Brooks and Plunket decided not to share the report with the IRB; instead, Plunket filed an annual report that stated, “There are no significant safety issues related to the vaccine.” That gave cover to McGee’s letter lying to patients about why the trial was being halted.
That’s when Mathias wrote her whistle-blowing letter. On the basis of its investigation, the OHRP shut down all federally funded human research at the university. The university, meanwhile, did its own digging and came to the same conclusions. It disbanded the Tulsa IRB, suspended and later fired McGee, and terminated Plunket and Brooks as well. And on July 7, 2000, it sent a new letter to McGee’s subjects. This one admitted that “in fact, the trial was closed because of possible safety concerns.”
In the wake of the trial’s cancellation, 11 families have sued McGee, Plunket, Brooks, the IRB and the university. Says Phyllis Friesner, whose husband died in McGee’s trial: “I want the universities and the hospitals to take notice. I want to change the way they do business.” Dawanna Robertson, who discovered she was pregnant while on the vaccine, has a more personal reason to be angry. “For the rest of my life,” she says, “I will wake up every day and think, ‘Is there something wrong with my daughter?’” Robertson says she blames everyone involved, “but I blame myself the most. How could I have been so dumb? I didn’t ask enough questions. I heard only what I wanted to hear.”
Yet 12 of the 94 patients, even after reading about all the problems with the vaccine, fought to be put back on it after the trial was shut down. Rosie Whisman, now 60, joined McGee’s trial in 1999 after finding a knot in the side of her groin. She had Stage IV melanoma, and doctors gave her three to six months to live. She found out about McGee after her daughter saw his ad on television. McGee did the surgery to remove the knot and put her on his vaccine. She has been cancer free ever since.
“I thank God every day for Dr. McGee,” she says. She doesn’t care, she says, that the drug wasn’t properly tested on animals before being given to her. “When you’re given three to six months to live and someone offers you something that will allow you to spend more time with your kids and grandkids, you do it. Who cares if they’ve tested it on animals? It was my only chance to live.”
A federal judge has ruled that he has no jurisdiction in the lawsuits against McGee, but attorneys are pursuing the matter in state court. Michael Atkinson, McGee’s lawyer, is trying to get the case dismissed there as well. “None of the participants in this trial suffered any real injury,” he says. “And any technical issues, they were the University of Oklahoma’s fault. They failed to give McGee adequate resources and staffing.”
In contrast to the blatant lapses in Tulsa, those at Johns Hopkins, in Baltimore, Md., were subtler and in a way more forgivable. Dr. Alkis Togias wasn’t testing a drug or pushing a treatment; he simply wanted to get at the mystery of why some people respond to airway irritation with asthma and some don’t. His idea was to have healthy volunteers breathe a chemical irritant called hexamethonium, then to monitor their reactions. His proposal went before the Bayview Medical Center IRB, one of two set up by Johns Hopkins to supervise research at its hospitals. As is usual at Bayview, the heavy lifting was done by a subcommittee, which pored over the study and asked Togias some pointed questions about the source and purity of the chemical. His answers satisfied the board, and the study was approved.
Nine months later, Ellen Roche, a Bayview employee who had volunteered for Togias’ study, was dead of respiratory failure–a direct result of having breathed hexamethonium. As in Oklahoma, the government shut down federally funded trials, and the OHRP and the hospital began their investigations. And as in Oklahoma, there were violations on several levels. For one thing, using subcommittees to pre-review applications is a violation of federal policy: minority views on a study’s safety expressed at the subcommittee level might have a harder time being heard by the full committee.
For another, there was evidence in the literature that hexamethonium might be unsafe. Togias’ own search didn’t turn it up, but after-the-fact searches using different search engines and databases did find references to the chemical’s potential risks to humans. The FDA also raised questions about the informed-consent forms that Roche and two other subjects had signed. On them, hexamethonium is referred to as a “medication” and as “[having] been used as an anesthetic”–giving subjects a false sense that it was an FDA-approved medicine and therefore safe. An outside review board commissioned by the hospital noted in a report, “The consent form is not meant to reassure the subject, quite the contrary, it is meant to raise every possible concern that might be relevant to the subject’s participation.”
Another criticism: Togias failed to report that his first subject (Roche was the third) had developed a cough. It went away, and Togias assumed it had to do with a viral infection making the rounds at Bayview at the time. To be safe, he added a buffer solution to the hexamethonium–but without informing the IRB, which he should have done. That omission may be a reflection of the prevailing sentiment at many hospitals: that the IRB and its review process are a bureaucratic pain in the neck, not a clinical necessity.
In the end, nobody could say that strict compliance would have saved Roche. But even the possibility haunts everyone involved. “If all things leading up to doing that study had been perfect and she had died, it still would have been a horrible event,” says Dr. Edward Miller, dean and CEO of Johns Hopkins Medicine, “but I would have felt better about the fact that we had done everything humanly possible to prevent it.”
Johns Hopkins’ response to the crisis, as in Oklahoma and at the University of Pennsylvania, was to re-review all its clinical trials and completely restructure its institutional review system; Johns Hopkins also brought in an outside IRB to evaluate all new applications until the process is complete. The University of Oklahoma, for its part, is spending hundreds of thousands of dollars to create a model system for human-subject protection–including requiring its researchers to become certified in subject safety. And last July, Penn instituted a new policy on financial conflicts of interest: any potential conflict, whether it involves funding or a financial stake in the outcome of the trial, must be reviewed both by the IRB and by a separate, university-wide panel of experts from the law school and ethics departments.
But the U.S. can’t afford to wait for every research institution to react to its own lapses. That’s the impetus for a sweeping overhaul of the OHRP, designed by director Greg Koski–with advice from a newly motivated Johns Hopkins–to make the agency more aggressive in protecting human subjects. It’s also behind legislation that will soon be moving through both houses of Congress. Representative Diana DeGette of Colorado will introduce a bill this week that is aimed at finally giving humans the same legislative protections that animals receive; the rules will apply to all research on humans, not just federally funded or FDA-regulated research. Her bill gives the OHRP more auditing responsibility and enforcement options, including the ability to punish individual researchers without shutting down an entire institution–something that office has long wanted. And her bill will lay down new rules that address financial conflicts of interest. In the Senate, Edward Kennedy plans to hold a hearing next week at which Mathias, among others, will testify, and will introduce his own bill soon after.
Both DeGette and Kennedy endorse the idea of accrediting IRBs. But they are split on whether accreditation should be mandatory. So far, Kennedy is saying yes. DeGette, who has championed patient protection in part because the University of Colorado was severely sanctioned by OHRP in 1999, thinks a voluntary system would, paradoxically, protect patients better. “The whole point of accreditation,” says DeGette, “is to encourage research institutions to reach for a higher bar, to go above and beyond the minimum requirements.”
Critics of the current system have all sorts of ideas about how hospitals and research labs could go beyond the minimum. One is the notion of subject advocates–independent consultants whose sole function would be to look out for the best interests of the subjects, not the scientists. Argues George Annas: “You cannot rely on the conscience of the individual investigator, because he has an inherent conflict of interest. He has to enroll subjects in his study.”
Many critics also point to the consent forms people sign when they join a clinical trial. Even when the risks are clearly spelled out–and they frequently aren’t–patients tend to misunderstand what’s actually going on. The truth is that less than 5% of subjects in Phase I trials, which measure the toxicity of a new drug, will receive any health benefit whatsoever. Yet when a 1995 University of Chicago study quizzed patients about why they enrolled in their Phase I cancer trials, fully 85% answered, “Possible therapeutic benefit.”
In another study of cancer patients, at Harvard Medical School, nearly 75% of the subjects did not understand that the trial was investigating a treatment that was not standard. Two-thirds said they did not know they might face additional pain or discomfort. Says Annas: “These trials involve a great deal of mutual self-deception. Patients really want to believe it’s treatment, and doctors really want to believe they are curing somebody.”
Insisting that patients be given the unvarnished truth about clinical trials might scare many away. But that doesn’t bother Alan Milstein, an attorney who has represented Jesse Gelsinger’s family, as well as many of the participants in McGee’s study. “The biggest myth out there,” Milstein says, “is that every one of these studies is essential to the advancement of medicine. That’s just nonsense. Most have to do with the advancement of the researcher himself.” If it were just a lawyer talking, that sentiment might be easy to dismiss. But Marcia Angell expresses a similar criticism: “We have floods of me-too drugs,” she complains. “So much research is trivial duplication.”
But no matter what regulations, what standardized forms, what oversight committees the government and institutions end up putting in place, in the end, it’s still up to the researcher to treat his subjects with dignity and care. In fact, argues University of Virginia bioethicist Jonathan Moreno, there’s a downside to stronger protectionism: “When you take away the discretion of scientists, it’s possible they’ll shrug their shoulders and say, ‘Protecting subjects isn’t my job. Someone else will have to take care of it.’ And if we don’t have a morally responsible community of investigators, then nothing we do will make a difference.”
That said, everyone involved in an experiment has the duty to put the subjects’ interests first. Says Duke’s Sugarman: “The moral responsibility for the protection of patients lies with the investigator, the sponsors, the people who carry out the research: nurses, assistants, technicians, research pharmacists. You can’t just say, ‘The IRB said it was O.K.’”
Luckily for many of the patients in Dr. Michael McGee’s vaccine trial, Cherlynn Mathias already had that attitude–though she has paid a price for her action. Facing nasty criticism from many of her colleagues at the University of Oklahoma and worried she would not be able to get decent medical treatment in Tulsa, she finally moved to Texas last August. Sometimes she wonders whether she did the right thing by sending that letter to the OHRP, but she is proud of the way the university responded.
She doesn’t say the same for her former boss, though. “What hurt me so much was that McGee allowed some of the patients to believe that the only reason they were dying was that they had been forced to stop taking his drug. Many of them died,” she says, “believing I was responsible for their deaths.” The truth is that she was responsible only for shattering their illusions. –With reporting by Alice Park/Baltimore