Back in 1982, when I first began my career as a family practitioner in a small town outside Boston, I was confident that the care I’d provide would be as effective as the care patients receive anywhere in the world. At the time, the death rate for Americans was lower than that of comparable countries, resulting in 128,000 fewer deaths annually. Although healthcare was expensive—costing 2.3 percent of GDP more than the average of 11 other wealthy countries—the rapid growth of HMOs and managed care plans promised to make our healthcare even more effective and efficient.
Over the ensuing four decades, however, the opposite has occurred. Age-adjusted mortality has improved so much more in comparable countries that, by 2017, an excess 478,000 Americans were dying each year. This translates into an extra 1,300 deaths daily, equivalent to three jumbo jets crashing every day. The everyday poor health of Americans and the inability of our healthcare system to mitigate preventable deaths amount to a crisis that dwarfs even the COVID-19 pandemic. And our excess spending has risen to 6.8 percent of GDP, or $1.5 trillion per year.
This raises a key question: Why have so many smart, well-trained doctors stood by as American healthcare descended into a state of profound dysfunction?
The answer lies in the gradual, nearly invisible commercial takeover of the medical “knowledge” that doctors are trained to trust.
This transition started in the 1970s, when the acceptance rate of grant applications for funding from the National Institutes of Health shrank—from roughly half of medical research applications to one-third. Then, in 1981, President Ronald Reagan slashed government support of university-based medical research, further pushing academic researchers into the waiting arms of industry, especially pharmaceutical companies. Following the 1980 passage of the University and Small Business Patent Procedures Act, nonprofit institutions and their researchers were allowed to benefit financially from the discoveries made while conducting federally funded research.
Former president of Harvard University Derek Bok expressed concern about the growth of commercial activities within academia: “Making money in the world of commerce often comes with a Faustian bargain in which universities have to compromise their basic values—and thereby risk their very souls…”
The biggest shift was, however, still to come.
Over the past few decades, the drug companies have taken over most of our clinical research. In 1991, academic medical centers (AMCs)—hospitals that train doctors and conduct medical research—received 80 percent of the money that industry was spending to fund clinical trials. The drug companies relied on academic researchers for their expertise in designing studies, enrolling patients, and analyzing the data. This arrangement allowed academics to receive the funding they needed while still preserving much of their independence. But by 2004, the percentage of commercially funded clinical trials conducted by AMCs had fallen from 80 to just 26 percent.
A look at the research contracts between corporations (mostly Big Pharma companies) and academic medical centers shows that 80 percent allowed the commercial funder to own, and thus control, the data from jointly conducted research. Furthermore, fully half of the research contracts between drug companies and academic institutions—the partnerships with the highest likelihood of upholding rigorous research standards—allowed industry insiders to ghostwrite clinical trial reports for publication in scientific journals, relegating the named authors to the position of “suggesting” revisions.
Nonetheless, thorough peer review ensures that these reports are accurate, right? Wrong. Unbeknownst to almost all doctors, peer reviewers are not granted access to the underlying data on which the reported findings are based. The drug companies own that data and keep it confidential as “corporate property.” Reviewers must rely on the brief data summaries included in the submitted manuscripts. Peer reviewers at even the most prestigious medical journals cannot possibly attest to the accuracy and completeness of the articles they review.
This sham was exposed in 2005, when the editors of the New England Journal of Medicine admitted they had not seen relevant data from a clinical trial of Merck’s arthritis drug Vioxx. Five years earlier, the journal had published an article extolling the drug’s safety, even though neither the editors nor the peer reviewers had been granted access to the underlying data, which showed that three heart attacks in patients treated with Vioxx had gone unreported. Had those data been properly disclosed and analyzed when the manuscript was first submitted, the article would have shown that Vioxx increased the risk of heart attack fivefold compared with over-the-counter naproxen (Aleve). And many of the estimated 30,000 Americans who died as a result of taking Vioxx after the incomplete article was published would never have been exposed to the drug.
To this day, Big Pharma companies remain unwilling to disclose their underlying clinical trial data. The most recent example involved Pfizer’s COVID-19 vaccine. In September 2021, one month after the vaccine had been granted full approval by the U.S. Food and Drug Administration (FDA), a group of medical researchers and scientists sued the agency for the release of the 451,000 pages of scientific documents it had evaluated before granting that approval. Even though the agency had needed only 108 days to evaluate these documents before granting the vaccine formal approval, the FDA (which Pfizer sought to join in defending the suit) argued that the fastest it could release the data was five hundred pages per month—meaning it would take seventy-five years for the documents to be released in full. On January 6, 2022, U.S. District Judge Mark Pittman ruled that the FDA must release 55,000 (not 500) pages of the documents each month until complete.
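The seventy-five-year figure is easy to verify from the page counts reported above; a quick back-of-the-envelope check:

```python
# Figures from the lawsuit: total pages under review, and the two release rates.
total_pages = 451_000

# At the FDA's proposed rate of 500 pages per month:
years_at_fda_rate = total_pages / 500 / 12
print(f"{years_at_fda_rate:.0f} years")  # ~75 years

# At the court-ordered rate of 55,000 pages per month:
months_at_court_rate = total_pages / 55_000
print(f"{months_at_court_rate:.1f} months")  # just over 8 months
```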
I want to be clear: I’m a strong advocate of getting vaccinated and boosted, especially for people age 65 and older. The CDC’s analysis of real-world data shows that last December unvaccinated adults had 41 times the risk of dying of COVID-19 compared to fully vaccinated and boosted adults. But I believe just as strongly that doctors and the public must have access to the underlying clinical trial data on which FDA approval is based—now, not in seventy-five years.
The lack of transparency of clinical trial data in peer review is similar around the world, but the effect is far greater in the U.S. because of our unique pharmaceutical policy. We have no formal assessment comparing the medical benefit and economic value of new drugs to older therapies, so healthcare professionals have no access to this critically important information.
Federally funded clinical practice guidelines are not allowed to include the relative cost of therapies in their recommendations, which means there is no consideration given to the chance that a drug may unnecessarily bankrupt patients or inflate the cost of health insurance. Further, the price of brand-name drugs is unregulated in this country, which is why they cost 3.5 times more in the U.S. than in other OECD countries. And unregulated prices increase the reward-to-risk ratio for overly aggressive marketing practices in the U.S.
The industry’s control over what doctors believe about optimal therapeutics explains why new, expensive drugs are used more liberally in the U.S. than in other countries. Without access to the actual clinical trial data, medical journals are publishing unvetted articles that doctors then rely on to treat their patients. Although prescription drugs “only” account for 17 percent of U.S. healthcare expenditures, this has become a “tail wags dog” situation: The drug companies control the “knowledge” that informs doctors’ clinical decisions. This leads to soaring pharmaceutical profits and crippling healthcare costs, while doctors have no way of knowing which therapies are more effective—or more efficient. Americans deserve better.