Do they or don’t they?
It’s the question that paper after paper after paper after paper has tried to address over the past decade: does industry sponsorship of clinical trials bias the outcomes of the study? Does such sponsorship influence how doctors interpret the research findings? Does it affect healthcare professionals’ willingness to believe the results and integrate an experimental treatment into their practice?
The latest attempt to navigate the thicket of pharma funding is in this week’s New England Journal of Medicine. In a paper titled “A Randomized Study of How Physicians Interpret Research Funding Disclosures,” Aaron S. Kesselheim and co-authors report the results of a study designed to test how scientific rigor and funding disclosure influence a clinical trial’s credibility in the eyes of doctors.
(And let’s just get the obvious out of the way now: Kesselheim et al’s study was not funded by a pharmaceutical company. The disclosure forms are available online as a PDF.)
Here is how the authors set out to determine how funding disclosure affects what doctors make of a study’s findings. First, they made up studies for three new drugs for common disorders: hyperlipidemia, diabetes, and angina. And really, the invented drug names are spectacular: lampytinib, bondaglutaraz, and provasinab. Bondaglutaraz! Fantastic.
Then, after what must have been a very enjoyable name-creating session, the authors wrote fake abstracts describing fake trials. The abstracts varied according to their level of scientific rigor (high, middle, and low for each drug) and funding status (no funding source mentioned; funded by the NIH; or funded by a pharmaceutical company with a lead author that was financially involved with the sponsoring company). The companies fake-sponsoring the studies were selected at random from the 12 pharmaceutical companies on this list.
So – three drugs, three study designs, three funding types, for a total of 27 abstracts. Then it was time for the survey, which was mailed to 503 physicians randomly selected from a total of 45,398 applicable physicians listed with the American Board of Internal Medicine. The physicians were offered $50 for completing the survey, a fact repeated in several notifications. Nonrespondents were even mailed a crisp five-dollar bill, with the promise of the remaining $45 upon return of the completed survey.
The physicians weren’t asked to review all 27 abstracts. Rather, each participant was asked to review three abstracts, each for a different drug. The physicians knew that these were made-up drugs and they were told to assume that the drug had recently been approved by the FDA and was covered by insurance – in other words, satisfying all the criteria that would otherwise be needed for a doctor to consider a drug safe and practical to prescribe. The levels of rigor and funding were selected at random for each of the three studies provided to each physician.
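For the combinatorially inclined, this is a simple 3 × 3 × 3 factorial design, with each physician seeing one randomly chosen rigor/funding combination per drug. Here's a quick sketch of that sampling scheme (the variable names and the exact labels are my own stand-ins for illustration, not the authors' actual randomization code):

```python
import itertools
import random

drugs = ["lampytinib", "bondaglutaraz", "provasinab"]
rigor_levels = ["high", "medium", "low"]
funding_types = ["none listed", "NIH", "industry"]

# All 27 possible abstracts: one per drug/rigor/funding combination.
abstracts = list(itertools.product(drugs, rigor_levels, funding_types))
assert len(abstracts) == 27

def assign_abstracts():
    """Give one physician three abstracts, one per drug, with the
    rigor level and funding type drawn at random for each drug."""
    return [
        (drug, random.choice(rigor_levels), random.choice(funding_types))
        for drug in drugs
    ]

print(assign_abstracts())
```

Because rigor and funding are drawn independently for each drug, every physician ends up rating all three drugs but a random slice of the 27 abstracts – which is what lets the authors later compare responses across funding conditions.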
The survey asked the respondents to score each abstract according to how likely they would be to prescribe the drug.
What is a rigorous clinical trial?
It’s worth pausing here to talk about what makes a clinical trial rigorous. The question at the heart of this NEJM report is: If a trial is conducted in a highly rigorous manner, will industry funding still diminish its credibility? And the accompanying, more editorial question: Should industry funding diminish the credibility of a rigorous clinical trial?
Here is what makes a clinical trial rigorous (according to the NEJM paper authors, and just generally speaking):
• The study should be randomized
• The study should be double-blind; that is, neither the investigators nor the participants know who’s receiving which treatment (the experimental or the control)
• The study should have an active comparator; that is, it should not be a single-arm study, but should be comparing two or more regimens
• The dropout rate (the number of enrolled patients who leave before the study is concluded) should be less than 9%
• For these hypothetical new drugs, the study was to have a sample size (i.e., total study population) of 5,322
• The enrolled patient population should accurately represent patients with the disease/condition in question. In other words, if the average age of patients with the condition in question is, say, 65, then the average age of patients in the study should not be, say, 35.
• The drug should be documented as being safe
• For the hypothetical drugs, a rigorous study was one with a follow-up of 36 months; that is, patients had been treated for at least three years with the experimental drug
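The checklist above reads almost like a scoring rubric. Here is a toy version of it in code (my own construction for illustration – the dictionary keys and the all-or-nothing scoring are assumptions, not the NEJM authors' actual classification scheme):

```python
def is_rigorous(trial):
    """Toy rubric based on the criteria listed above: a hypothetical
    trial counts as rigorous only if it meets every criterion."""
    return (
        trial["randomized"]
        and trial["double_blind"]
        and trial["active_comparator"]
        and trial["dropout_rate"] < 0.09       # fewer than 9% leave early
        and trial["sample_size"] >= 5322       # the hypothetical trials' n
        and trial["representative_population"]
        and trial["safety_documented"]
        and trial["follow_up_months"] >= 36    # at least three years
    )

example = {
    "randomized": True, "double_blind": True, "active_comparator": True,
    "dropout_rate": 0.05, "sample_size": 5322,
    "representative_population": True, "safety_documented": True,
    "follow_up_months": 36,
}
print(is_rigorous(example))  # True
```

In the actual study, of course, rigor wasn't a binary flag but a three-level variable (high, medium, low) built by relaxing several of these criteria at once.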
Okay, so. With the three abstracts in hand, the physicians were asked to say how likely they were to prescribe the new drug (1=very unlikely; 7=very likely), and to score the methodological rigor of the trial (1=not at all rigorous; 7=very rigorous), and to score their confidence in the conclusions reached by the investigators (you guessed it: 1=not confident; 7=very confident). And participants were also asked to respond to the juicy question:
“Do you think that pharmaceutical company funding is likely to influence the outcome of scientific studies about the efficacy and safety of pharmaceuticals in favor of the drug in question?”
(Survey respondents were also asked to disclose their own financial support received in the previous year.)
Dim the lights, it’s time for the results!
Well, despite the promise of fifty bucks, only about half of the solicited physicians responded to the survey (53.5% to be exact, for a total of 269 respondents). Interestingly, about 75% of the respondents reported receiving at least one type of industry support in the past year. And they generally agreed with that question about whether industry funding can influence trial outcomes.
Also – there was a high correlation between the rigor of the abstract and the perceived rigor. In other words, the respondents correctly discerned which abstracts were for the most rigorous studies and which for the least. The confidence in the study outcome matched the level of rigor; the least rigorous studies engendered the least confidence in the results.
More interesting, there was a clear association between the funding disclosure and how physicians perceived the rigor and results of a trial. When industry funding was disclosed, the trial was perceived as less rigorous than when no funding source was disclosed. And regardless of the design, physicians had less confidence in the results of industry-sponsored trials, even when the trial was highly rigorous (I know, the word “rigor” is getting irritating right about now. Sorry for that. My inner thesaurus is on a coffee break). The respondents were also less willing to prescribe drugs tested in industry-funded trials than drugs tested in trials with no funding listed.
• Industry-funded trials were considered less important than NIH-funded trials
• Respondents were less interested in reading the entire report of an industry-funded study than they were in reading the entire report of an NIH-funded study
• The distaste for industry-funded research was evident at all levels of scientific rigor
• US-trained physicians were less likely to say they were willing to prescribe any of the pretend drugs than were physicians who’d trained outside the US
• Older physicians were more likely to say they’d prescribe the new drugs than were younger physicians
As the authors write in their discussion:
“…respondents downgraded the credibility of industry-funded trials, as compared with the same trials randomly characterized as having NIH funding or having no source of support listed. The magnitude of this reduction in perceived methodologic rigor was about the same as that for low-rigor trials as compared with medium-rigor trials. Physicians’ skepticism of industry-funded research affected their responses to high-rigor and low-rigor trials similarly.”
The authors see the results of their survey as problematic. They point out that the pharmaceutical industry has funded the study of many drugs that are now clinically important, and they express concern that excessive skepticism could hinder the translation of research findings into clinical practice. They call on the pharmaceutical industry to address this thinking so that a clinical trial’s credibility rests more on its rigor than on how it’s paid for: “The methodologic rigor of a trial, not its funding disclosure, should be a primary determinant of its credibility.”
The source of skepticism
The skepticism evident in the surveys has accrued over years. There was this lawsuit, in which GlaxoSmithKline was accused of concealing data. There’s the recent $322 million fine of Merck over off-label promotion of Vioxx, just a year after the company paid a $1 billion penalty for the way it promoted rofecoxib. Abbott recently paid $1.6 billion for promotion of unapproved uses of Depakote. And then there are the many disparaging insights in The Truth About the Drug Companies.
Yet in an editorial accompanying the report of the survey results, Jeffrey Drazen, MD, points out that investigators in NIH-sponsored studies also have incentives, such as academic promotion and recognition, that could influence the outcomes of their trials. He writes:
“A trial’s validity should ride on the study design, the quality of data-accrual and analytic processes, and the fairness of results reporting. Ideally, these factors – not the funding source – should be the criteria for deciding the clinical utility. Patients who put themselves at risk to provide these data earn our respect for their participation; we owe them the courtesy of believing the data produced from their efforts and acting on the findings so as to benefit other patients.”
That last clause might be a bit of a stretch. Feeling beholden to the patients who enrolled in the trial as justification for believing the results of the study? I’m not sure that I’d want my doctor prescribing me a drug because he owed it to the patients who’d been part of its investigation. But the call for balance seems warranted. After all, unless some very vast change sweeps across the twisted world of medical care, industry funding of clinical trials is here to stay.
The Industry’s Influence by PLOS Blogs Network, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 4.0 International License.