Back in 1982, when I began my career as a family practitioner in a small town near Boston, I was confident that the care I would provide would be as effective as the care patients received anywhere in the world. The death rate in America was then significantly lower than in comparable countries, the equivalent of 128,000 fewer American deaths per year. And although our healthcare was expensive—costing 2.3% more of our GDP than the average of 11 other wealthy countries—the rapid growth of HMOs and managed care plans promised to make it even more effective and efficient.
Over the next four decades, the opposite happened. The U.S. age-adjusted mortality rate fell so far behind that of comparable countries that, by 2017, an additional 478,000 Americans were dying every year. That is the equivalent of three fully loaded jumbo jets crashing every single day. Americans' poor health and excess deaths amount to an ongoing crisis larger than the COVID-19 pandemic. Meanwhile, our excess healthcare expenditure has reached 6.8% of GDP, or $1.5 trillion each year.
This raises the question: Why have so many intelligent and well-trained doctors stood by as American healthcare fell into profound dysfunction?
The answer lies in the gradual, nearly invisible commercial takeover of the medical “knowledge” that doctors are trained to trust.
The transition started in the 1970s, when the acceptance rate of grant applications for funding from the National Institutes of Health shrank from roughly half of medical research applications to one-third. In 1981, President Ronald Reagan cut government funding for university-based medical research, pushing academic researchers further into the waiting arms of industry, particularly the pharmaceutical companies. The 1980 University and Small Business Patent Procedures Act compounded the shift by allowing nonprofit institutions and their investigators to profit financially from discoveries made during federally funded research.
Former president of Harvard University Derek Bok expressed concern about the growth of commercial activities within academia: “Making money in the world of commerce often comes with a Faustian bargain in which universities have to compromise their basic values—and thereby risk their very souls…”
However, the biggest shift had yet to occur.
In the decades since, most clinical research has been taken over by the drug companies. In 1991, academic medical centers (AMCs)—the hospitals that train doctors and conduct medical research—received 80 percent of the money industry spent to fund clinical trials. Drug companies relied on academic researchers to design the studies, enroll the patients, and analyze the data, an arrangement that gave academics the funds they needed while preserving a great deal of autonomy. By 2004, however, AMCs were conducting only 26 percent of commercially funded clinical trials.
An examination of research agreements between companies (mainly Big Pharma) and academic medical centers shows that 80 percent allowed the commercial funders to control both the data and the conduct of the research. Furthermore, fully half of the research contracts between drug companies and academic institutions—the partnerships with the highest likelihood of upholding rigorous research standards—allowed industry insiders to ghostwrite clinical trial reports for publication in scientific journals, relegating the named authors to the role of "suggesting" revisions.
Surely, one might think, peer review ensures that these reports are accurate. Wrong. Unbeknownst to most doctors, peer reviewers are not granted access to the underlying data on which a trial's findings rest. The drug companies own that data and keep it confidential as "corporate property." Reviewers must rely on brief data summaries included in the submitted manuscripts. Even at the most respected medical journals, peer reviewers cannot attest to the accuracy or completeness of the results they are reviewing.
The editors of the New England Journal of Medicine admitted that they had never seen relevant data from a clinical trial of Merck's arthritis drug Vioxx. Five years earlier, the journal had published an article extolling the drug's safety, even though neither the editors nor the peer reviewers had been granted access to the underlying data, which showed three heart attacks in patients treated with Vioxx that had not been reported. Had this information been disclosed and analyzed when the manuscript was submitted, it would have revealed that Vioxx increased the risk of heart attack fivefold compared with over-the-counter naproxen (Aleve). Many of the approximately 30,000 Americans who died from Vioxx-related heart attacks after the incomplete article was published would never have been exposed to the drug.
Big Pharma firms still refuse to divulge their clinical trial data. The most recent example involves Pfizer's COVID-19 vaccine. One month after the U.S. Food and Drug Administration granted the vaccine full approval, a group of medical scientists and researchers sued the FDA to obtain the 451,000 pages of documents it had reviewed before approving the vaccine. Even though the FDA had taken only 108 days to evaluate those documents before granting approval, the agency—with Pfizer joining as a party to the suit—argued that it could not release the information faster than 500 pages per month, meaning the public would have to wait seventy-five years for the full record. On January 6, 2022, U.S. District Judge Mark Pittman ruled that the FDA must release at least 55,000 pages per month (not 500) until all the documents are public.
I want to be clear: I am a strong advocate of getting vaccinated and boosted, especially for people age 65 and older. The CDC's analysis of real-world data shows that last December unvaccinated adults had 41 times the risk of dying of COVID-19 compared with fully vaccinated and boosted adults. But I believe just as strongly that doctors and the public must have access to the underlying clinical trial data on which FDA approval is based now—not in seventy-five years.
The lack of transparency of clinical trial data for peer review is a problem all over the globe. What sets the U.S. apart is its pharmaceutical policy: there is no official assessment of the medical and economic value of new medicines compared with older therapies, so American health care professionals are denied this vital information.
Federally funded clinical guidelines are not permitted to consider the relative prices of therapies, so patients may be unnecessarily bankrupted or see their health insurance costs rise. Brand-name drug prices are also unregulated here, which is why Americans pay far more for them than residents of other OECD nations. And unregulated prices tilt the risk-reward calculation in favor of aggressive marketing strategies.
The industry's control over what doctors believe about optimal therapy explains why new, expensive drugs are used far more liberally in the U.S. than in other countries. Medical journals publish articles whose underlying clinical trial data no one outside the companies has reviewed, and doctors rely on those articles to treat patients. Although prescription drugs account for "only" 17% of U.S. healthcare expenditures, this has become a tail-wags-the-dog situation: the drug companies control the "knowledge" that informs doctors' clinical decisions. The result is soaring pharmaceutical profits and crippling healthcare costs, while doctors have no way of knowing which therapies are more effective—or more efficient. Americans deserve better.