Conference Coverage

Industry-funded rheumatology RCTs are higher quality


 

REPORTING FROM RWCS 2019

Industry-funded randomized, controlled clinical trials published in the three top-rated rheumatology journals during the past 20 years are of significantly higher overall quality than the nonindustry-funded ones, Michael Putman, MD, said at the 2019 Rheumatology Winter Clinical Symposium.

Dr. Michael Putman of Northwestern University, Chicago (Bruce Jancin/MDedge News)

Dr. Putman, a second-year rheumatology fellow at Northwestern University, Chicago, analyzed all randomized, controlled trials (RCTs) of pharmacotherapy featuring a comparator – either placebo or an active agent – published in 1998, 2008, and 2018 in Annals of the Rheumatic Diseases, Rheumatology, and Arthritis & Rheumatology.

His main takeaway: “Rheumatologic interventions seem to work pretty well. The mean absolute risk reduction in the trials is 17.5%, so the average number of patients who need to be treated with a rheumatologic intervention is about five. This is why it’s such a great specialty to be a part of: A lot of our patients get better.”
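The "about five" figure follows from the standard relationship between number needed to treat (NNT) and absolute risk reduction (ARR), NNT = 1/ARR. A minimal sketch of that arithmetic, using the 17.5% mean ARR cited in the talk (the rounding to "about five" is the speaker's):

```python
# Number needed to treat (NNT) from absolute risk reduction (ARR): NNT = 1 / ARR.
# Uses the 17.5% mean ARR cited in the talk.
mean_arr = 0.175
nnt = 1 / mean_arr
print(f"NNT = 1 / {mean_arr} = {nnt:.1f}")  # ~5.7, i.e. roughly five to six patients
```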

He created an RCT quality rating scale that captured the strength of study design, methodology, and findings based upon whether a randomized trial used a double-blind design; identified a prespecified primary outcome; and featured patient-reported outcomes, power calculations, sensitivity analysis, adjustment for multiple hypotheses, and intention-to-treat analysis. He then applied the rating scale to the 85 published RCTs in the three study years.
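The article does not describe how the scale weights its items, so the following is only a rough sketch that assumes each criterion counts equally toward a percentage score; the criterion names come from the list above, and the example trial values are hypothetical.

```python
# Rough sketch of a trial-quality score as the fraction of criteria met.
# Equal weighting is an assumption; the actual scoring scheme is not described in the article.
CRITERIA = [
    "double_blind",
    "prespecified_primary_outcome",
    "patient_reported_outcomes",
    "power_calculation",
    "sensitivity_analysis",
    "multiple_hypothesis_adjustment",
    "intention_to_treat",
]

def quality_score(trial: dict) -> float:
    """Return the percentage of quality criteria the trial satisfies."""
    met = sum(bool(trial.get(criterion, False)) for criterion in CRITERIA)
    return 100 * met / len(CRITERIA)

# Hypothetical example: a trial meeting 5 of the 7 criteria scores about 71%.
example_trial = {
    "double_blind": True,
    "prespecified_primary_outcome": True,
    "patient_reported_outcomes": True,
    "power_calculation": True,
    "sensitivity_analysis": False,
    "multiple_hypothesis_adjustment": False,
    "intention_to_treat": True,
}
print(f"quality score: {quality_score(example_trial):.0f}%")  # 71%
```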

Of note, 84% of the trials published in 2018 were industry funded, up from 74% in both 2008 and 1998.

“Industry funds the vast majority of studies. Industry studies are significantly more likely to be appropriately double blinded, report patient-reported outcome measures, use intention to treat, and they have a higher overall quality,” according to Dr. Putman.

Indeed, the industry-funded studies averaged a 66% score on his quality grading scale, compared with 45% for nonindustry-funded studies.

Utilization of most of the quality metrics remained stable over time. The exceptions: incorporation of intention-to-treat analysis increased from 58% in 1998 to 87% in 2018, and sensitivity analysis was employed in just 5% of the trials published in 1998, compared with 37% in 2008 and 26% in 2018.

The most important change over the past 2 decades, in his view, has been the shrinking proportion of RCTs featuring an active-drug, head-to-head comparator arm. In 1998, 42% of studies featured that design; for example, comparing methotrexate to sulfasalazine. By 2018, that figure had dropped to just 13%.

“Most of our trials today compare an active compound, such as an interleukin-17 inhibitor, to a placebo. I think that’s a big change in how we do things,” Dr. Putman observed. “With 84% of our studies being funded by industry, the incentives in medicine right now don’t support active comparator research. It’s harder to show a difference between two things that work than it is to show a difference between something and nothing.”

However, he’d welcome a revival of head-to-head active comparator trials.

“I’d really love to have that happen,” he said. “We have basic questions we haven’t answered yet about a lot of our basic drugs: Like in myositis, should you start with Imuran [azathioprine], CellCept [mycophenolate mofetil], or methotrexate?”

Another striking change over time has been the dwindling proportion of published trials with a statistically significant finding for the primary outcome: 79% in 1998, 46% in 2008, and 36% last year. Dr. Putman suspects the explanation lies in the steady improvement in the effectiveness of standard background therapy for many conditions, which makes it tougher to show a striking difference between the add-on study drug and add-on placebo.

“We’re a victim of our own success,” he commented.

In any event, many key secondary outcomes in the RCTs were positive, even when the primary endpoint wasn’t, according to Dr. Putman, and there was a notable dearth of completely negative clinical RCTs published in the three top journals.

“The more cynical interpretation is there’s an incredible amount of publication bias, where we’re only publishing studies that show an effect and the journals or investigators are censoring the ones that don’t. The more charitable explanation, which is probably also true, is that by the time you get to putting on an RCT you kind of think, ‘This thing works.’ You’re not testing random stuff, so your pretest probability of a drug being effective when it enters into an RCT is probably shifted toward effectiveness,” Dr. Putman speculated.

He reported having no financial conflicts regarding his study.
