Commentary

Publish or Perish; But What, When, and How?


If we were to try to identify a Zeitgeist (spirit of the time) in society, one possible answer would be data. In the field of clinical research this could mean data that is collected, not collected, public, hidden from view, published, not published—the list of issues connected to data is almost endless.

In this editorial, we would like to examine clinical research data from 3 different perspectives. What happens when there is no data available? Or when only incomplete data can be accessed? Or when all of the data is in the public realm but is uncritically taken at face value?

There is currently a groundswell of opinion that the transparency of clinical trial data needs to be tackled. This campaign is particularly strong in the United Kingdom, where the British Medical Journal and advocacy groups like www.alltrials.net have gained prominence. Ben Goldacre, author of the recent book Bad Pharma, goes so far as to say, "The problem of missing trials is one of the greatest ethical and practical problems facing medicine today."1

Here in the United States we also have issues with data. One study from 2009 found that the results of only 44% of trials conducted in the United States and Canada were published in the medical literature.2 However, that study covered general medicine; how are we faring in orthopedics? A study from 2011 identified orthopedic trauma trials registered on www.clinicaltrials.gov and followed them up to see whether they were published within a reasonable timeframe.3 The result? Only 43.2% of the orthopedic trauma trials studied resulted in a publication, a figure that almost exactly mirrors the finding from the general medicine study.

Data that is not released obviously skews the evidence available to us as clinicians and researchers. More insidious still is incomplete data, because it gives a false picture to anyone reading the original study or to a researcher who wants to include that study in a meta-analysis. We are all aware of the difficulty of achieving complete patient follow-up because, ironically, we as surgeons have enabled our patients to walk away from the study. How should we best deal with these gaps in our knowledge? Statistical techniques have been developed to deal with just this problem.

One set of researchers looked at how missing data were handled in intention-to-treat analyses of orthopedic randomized clinical trials.4 They took 1 published trial of displaced midshaft clavicular fractures and recalculated its results, changing how patients lost to follow-up were handled: instead of the original approach of excluding those patients from the analysis, they applied the Last Observation Carried Forward (LOCF) technique, in which a missing endpoint is imputed with the last value actually observed for that patient. This change in approach altered the statistical significance of the nonunion and overall complication results. However, the use of these various methods to deal with missing data in intention-to-treat analysis is in itself the subject of some controversy in orthopedic clinical research.5
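
To make the contrast concrete, here is a minimal sketch in Python. The patient records below are entirely hypothetical (invented for illustration, not taken from the clavicle trial or its reanalysis); the point is only to show how the same records can yield different denominators and event rates under the two approaches.

```python
# Hypothetical follow-up data: for each patient, union status at successive
# visits (True = fracture united); None marks visits missed after loss to
# follow-up. All records are invented for illustration.
patients = {
    "P1": [False, False, True],   # completed follow-up, united
    "P2": [False, True, True],    # completed follow-up, united
    "P3": [False, None, None],    # lost after visit 1; last seen not united
    "P4": [False, False, None],   # lost after visit 2; last seen not united
    "P5": [False, False, False],  # completed follow-up, nonunion
}

def final_outcome_exclusion(visits):
    """Exclusion approach: analyze only patients observed at the final visit."""
    return visits[-1]  # None -> patient is dropped from the analysis

def final_outcome_locf(visits):
    """LOCF: impute the final endpoint with the last observed value."""
    return [v for v in visits if v is not None][-1]

for method in (final_outcome_exclusion, final_outcome_locf):
    outcomes = [method(v) for v in patients.values()]
    analyzed = [o for o in outcomes if o is not None]
    nonunions = sum(1 for o in analyzed if not o)
    print(f"{method.__name__}: {nonunions}/{len(analyzed)} nonunions "
          f"({100 * nonunions / len(analyzed):.0f}%)")
```

On this toy dataset, exclusion gives a nonunion rate of 1 of 3 (33%), whereas LOCF gives 3 of 5 (60%). Both the denominator and the event rate change, which is exactly how the choice of missing-data method can flip the statistical significance of a trial's results.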

There is more than merely anecdotal evidence that uncritical acceptance of research findings could harm patients. We are all familiar with the recent metal-on-metal hip implant controversy, in which promising early results were not borne out by later experience. One study, which found combined clinical and radiographic failure rates of 28% among large-diameter metal-on-metal articulations in total hip arthroplasty, notes that "adequate preclinical trials may have identified some of the shortcomings of this class of implants before the marketing and widespread use of these implants ensued."6

Is such a volte-face in the published evidence a rare occurrence? Perhaps not. A well-known review of 49 studies from 2005 found that 45 claimed the intervention was effective.7 Subsequent investigations contradicted the findings of 7 of the original positive studies (16%), and a further 7 (16%) reported effects stronger than those of any of the follow-up studies, which were larger or better controlled. The evidence for almost one-third of the positive-result studies (7 + 7 = 14 of 45, or roughly 31%) was therefore changed, either wholly or partly. Keep in mind that this figure does not take into account the 11 positive-result studies that were not replicated at all.

In all of this, we have to accept that things are rarely black and white. When is the best time to release information? For example, in the Study to Prospectively Evaluate Reamed Intramedullary Nails in Tibial Fractures (SPRINT), the conclusion for the closed-fracture treatment subgroup changed only after 800 patients had been enrolled; a smaller trial would have led to an incorrect conclusion for this subgroup.8 As you can see, deciding when to release data is a delicate subject and is influenced by many factors, not least time and costs. Many contemporary clinical researchers also operate under pressure to publish.9 And all of us are aware of the kudos that accrue to first-named authors!
