In 1982, when I began my career as a family physician in small-town Boston, I was convinced that the care I would provide would be as effective as the care patients receive anywhere in the world. At the time, the death rate for Americans was lower than that of comparable countries, resulting in 128,000 fewer deaths per year. And even though health care was expensive – costing 2.3% more of our GDP than the average of 11 other wealthy countries – the rapid growth of HMOs and managed care plans promised to make our health care even more effective and efficient.
Over the next four decades, however, the opposite happened. The same age-adjusted death rate improved so much in comparable countries that by 2017, an excess of 478,000 Americans were dying every year. That translates to 1,300 additional deaths per day – the equivalent of three jumbo jets crashing each day. The daily poor health of Americans and the failure of our healthcare system to mitigate preventable deaths amount to a crisis that eclipses even the COVID-19 pandemic. And our overspending reached 6.8% of GDP, or $1.5 trillion a year.
This raises a key question: Why did so many smart, well-trained doctors sit idly by as American health care descended into a state of profound dysfunction?
The answer lies in the gradual, almost invisible commercial takeover of the medical “knowledge” that doctors are trained to trust.
This transition began in the 1970s, when the acceptance rate for National Institutes of Health funding applications declined from about half of medical research applications to one-third. Then, in 1981, President Ronald Reagan cut government support for academic medical research, pushing academic researchers even further into the waiting arms of industry, especially pharmaceutical companies. Following passage of the Universities and Small Business Patent Procedures Act in 1980, non-profit institutions and their researchers were allowed to benefit financially from discoveries made while conducting federally funded research.
Former President of Harvard University Derek Bok expressed concern about the growth of commercial activities within academia: “Making money in the world of commerce often comes with a Faustian bargain in which universities must compromise their core values – and thereby risk their souls.”
The biggest change, however, was yet to come.
Over the past few decades, pharmaceutical companies have taken over most of our clinical research. In 1991, Academic Medical Centers (AMCs) – hospitals that train doctors and conduct medical research – received 80% of the money the industry spent to fund clinical trials. Pharmaceutical companies relied on academic researchers for their expertise in study design, patient recruitment, and data analysis. This arrangement allowed academics to receive the funding they needed while preserving much of their independence. But by 2004, the percentage of commercially funded clinical trials conducted by AMCs had fallen from 80% to just 26%.
A look at research contracts between corporations (primarily big pharma) and academic medical centers shows that 80% allowed the commercial funder to own, and therefore control, the data from jointly conducted research. Additionally, half of research contracts between pharmaceutical companies and academic institutions – the partnerships most likely to meet rigorous research standards – allowed industry insiders to ghostwrite clinical trial reports for publication in scientific journals, relegating the named authors to the position of merely “suggesting” revisions.
Still, extensive peer review ensures these reports are accurate, right? Wrong. Unbeknownst to almost all physicians, peer reviewers are not granted access to the underlying data that serve as the basis for the conclusions reported. Pharmaceutical companies own this data and keep it confidential as “company property.” Reviewers must rely on brief data summaries included in submitted manuscripts. As a result, peer reviewers at even the most prestigious medical journals cannot attest to the accuracy and completeness of the articles they review.
This sham was exposed in 2005, when the editors of the New England Journal of Medicine admitted they had not seen relevant data from a clinical trial involving Merck’s arthritis drug Vioxx. Five years earlier, the journal had published an article touting the drug’s safety even though neither the editors nor the peer reviewers had access to the underlying data, which showed that three heart attacks in Vioxx-treated patients had not been reported. If this data had been properly disclosed and analyzed when the manuscript was first submitted, the article would have shown that Vioxx significantly multiplied the risk of heart attack compared to over-the-counter naproxen (Aleve). And many of the approximately 30,000 Americans who died from taking Vioxx after the incomplete article was published would never have been exposed to the drug.
To this day, big pharmaceutical companies are still unwilling to disclose their underlying clinical trial data. The most recent example involves Pfizer’s Covid-19 vaccine. In September 2021, a month after the vaccine was fully approved by the U.S. Food and Drug Administration (FDA), a group of medical researchers and scientists sued the agency to release the 451,000 pages of scientific documents it had evaluated before granting full approval of the vaccine. Even though the agency needed only 108 days to sufficiently assess those documents before granting formal approval, the FDA (with Pfizer seeking to join the lawsuit) argued that the fastest it could release the data was five hundred pages a month – meaning it would take seventy-five years before the documents were published in full. On January 6, 2022, U.S. District Judge Mark Pittman ruled that the FDA must release 55,000 (not 500) pages of the documents each month until the release is complete.
To be clear, I am a strong advocate for vaccination and boosting (especially for those ages 65 and older): the CDC’s analysis of real-world data shows that as of last December, unvaccinated adults were 41 times more likely to die from COVID-19 than fully vaccinated and boosted adults. But I believe equally strongly that physicians and the public should have access to the underlying clinical trial data on which FDA approval is based now, not seventy-five years from now.
The lack of transparency of clinical trial data in peer review is similar around the world. But its effect is much greater in the United States because of our unique drug policy. We have no formal evaluation that compares the medical benefits and economic value of new drugs against older therapies, so healthcare professionals are denied this critically important information.
Federally funded clinical practice guidelines are not allowed to consider the relative cost of therapies in their recommendations, which means the possibility that a drug will needlessly bankrupt patients or inflate the cost of health insurance plays no role at all. In addition, the price of brand-name drugs is not regulated in this country, which is why they cost 3.5 times more in the United States than in other OECD countries. And unregulated prices increase the risk-reward ratio for overly aggressive marketing practices in the United States.
Industry control over what doctors think about optimal treatments explains why expensive new drugs are used more freely in the United States than in other countries. Without access to actual clinical trial data, medical journals publish unverified articles that doctors then rely on to treat their patients. Although prescription drugs make up “only” 17% of U.S. healthcare spending, it has become a tail-wagging-the-dog situation: pharmaceutical companies control the “knowledge” that informs doctors’ clinical decisions. This leads to soaring pharmaceutical profits and crippling healthcare costs, while doctors have no way of knowing which therapies are most effective or efficient. Americans deserve better.