News

Incentives Are Not Improving Care, Expert Says

WASHINGTON — The few studies that have examined the effectiveness of incentivized pay-for-performance programs have found a mix of moderate to no improvement in quality measures, which, in some instances, have led to unintended consequences, Dr. Daniel B. Mark said at the annual meeting of the Heart Failure Society of America.

There are more than 100 reward or incentive programs that have started in the private U.S. health care sector under the control of employer groups or managed care organizations, said Dr. Mark, but congressionally authorized programs by the Centers for Medicare and Medicaid Services have received the most attention.

It is important to examine the evidence base that pay-for-performance programs actually improve quality because “people are making this association,” said Dr. Mark, director of the Outcomes Research and Assessment Group at the Duke Clinical Research Institute, Durham, N.C.

During the last 20 years, incentivized performance programs have shown that “what you measure generally improves and what gets measured is generally what's easiest to measure. But the ease of measurement does not necessarily define the importance of the measurement.” Furthermore, very little, if anything, is known about whether these initiatives are cost effective for the health care system at large, Dr. Mark said, though he acknowledged that this may be an oversimplification of the outcomes of such programs.

A systematic overview of 17 studies of pay-for-performance programs published during 1980–2005 found that one of two studies of system-level incentives had a positive result in which all performance measures improved. Of nine studies of incentive programs aimed at the provider group level, seven had partially or fully positive results, but the effect sizes were “quite small.” Positive or partially positive results were seen in five of six programs at the physician level (Ann. Intern. Med. 2006;145:265–72).

Nine of the studies were randomized and controlled, but eight of these had a sample size of fewer than 100 physicians or groups; the other study had fewer than 200 groups. “If these had been clinical trials, they would have all been considered extremely underpowered and preliminary,” Dr. Mark said.

Programs in four studies appeared to have created unintended consequences, including “gaming” the baseline level of illness, avoiding sicker patients, and improved documentation in immunization studies without any actual change in the number of immunizations given or effect on care. The studies included no information on the optimal duration of these programs or whether their effect persisted after a program was terminated. Only one study offered a preliminary examination of the cost-effectiveness of a program.

Another study compared patients with acute non–ST-elevation myocardial infarction at 57 hospitals that participated in the CMS Hospital Quality Incentive Demonstration with those at 113 control hospitals that did not, to determine whether a pay-for-performance strategy produced better quality of care. There was “very little evidence that there was any intervention effect,” Dr. Mark said. Measures that were not incentivized by CMS also did not appear to change (JAMA 2007;297:2373–80).
