The use of quality metrics in health care: primum non nocere and the law of unintended consequences







Related article, Page 259



Quality and the law of unintended consequences


“Healthcare in the United States is not as safe as it should be….” With these words, the Institute of Medicine launched its wake-up call to the US health care industry regarding the need to improve the quality of the care that we provide to our patients. Since that publication, government agencies, professional organizations, hospitals, and health care providers have worked diligently to improve the quality of the care that we deliver to our patients on a daily basis. In an effort to accelerate the pace of quality improvement, the Affordable Care Act included 3 value-based purchasing programs: the hospital value-based purchasing program, the hospital readmissions reduction program (HRRP), and the hospital-acquired condition (HAC) reduction program. Each of these programs uses financial incentives and penalties based on the performance of health care organizations on predetermined quality metrics. As a further stimulus to improve, hospital performance on these quality metrics is publicly reported on the Hospital Compare website ( http://www.medicare.gov/hospitalcompare ).


In fiscal year 2013, based on data from these 3 value-based purchasing programs, Centers for Medicare and Medicaid Services (CMS) changes to inpatient prospective payment system payments to hospitals resulted in a “redistribution of almost 1 billion dollars among hospitals.” These payments are a redistribution of money because most of these programs are budget neutral: money is removed from payments to poorly performing hospitals and added to payments to high performers. With this amount of money at stake, it is incumbent on every hospital and physician to understand the basics of these programs.




  • Hospital value-based purchasing program: Hospital performance is assessed on a set of quality measures separated into 5 domains: process of care, patient safety and outcomes, mortality, patient experience of care, and efficiency. “Hospital performance on each measure is scored taking into account achievement relative to a predetermined standard as well as a hospital’s improvement compared with a prior period.”



  • HRRP: Hospitals determined to have an “excessive rate of preventable readmissions” can be penalized up to 3% of their Medicare payments. Currently, the program includes readmissions for heart attack, heart failure, pneumonia, hip arthroplasty, knee arthroplasty, and chronic obstructive pulmonary disease. In fiscal year 2015, “about three-fourths of the 3478 hospitals for which an HRRP adjustment was reported by the CMS were penalized.”



  • HAC reduction: This program measures hospital performance on multiple metrics, mainly related to infection rates and patient safety indicators. The unique component of this program is that the “law requires that hospitals with scores in the worst performing quartile receive a 1% point reduction on their total inpatient prospective payment system payments.”



These programs and initiatives have been successful in “improving hospital performance on the various program metrics.” However, many authors are beginning to question the validity, accuracy, and value of the performance metrics utilized in these programs and other quality initiatives. Given the significant effect that these programs, and the metrics they utilize, have on health care organizations and the patients whom they serve, it is imperative that the quality metrics and measures utilized be both appropriate and accurate. In this issue of the Journal, Morgan et al report on their review of surgical site infections (SSI) in patients undergoing abdominal hysterectomy in the Michigan Surgical Quality Collaborative. The SSI rate after abdominal hysterectomy will be a metric in the HAC reduction program. Their study identified 2 significant flaws in this metric. First, hospitals in the bottom quartile, which would receive a 1% penalty under the HAC program, did not have infection rates that were statistically significantly different from those of hospitals above the bottom quartile. These hospitals could therefore be financially penalized even though the difference in their infection rates may be due simply to chance. Second, after risk adjustment based on evidence-based risk factors for SSI, such as body mass index >30 and cancer, 20% of hospitals changed quartiles, which would alter which hospitals are penalized under this program. Without risk adjustment, these hospitals, and potentially their patients, would be harmed by this program.
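
To make the quartile mechanics concrete, below is a minimal sketch using entirely hypothetical hospitals and a simple observed-to-expected (O/E) adjustment; it is not the CMS HAC scoring algorithm or the risk model applied by Morgan et al, only an illustration of how accounting for case mix can change which hospitals land in the worst-performing quartile.

```python
# Hypothetical illustration: how a simple observed-to-expected (O/E) risk
# adjustment can change worst-quartile membership. This is NOT the CMS HAC
# methodology or the risk model used by Morgan et al.
import statistics

# (hospital, observed SSI rate per 100 hysterectomies,
#  expected rate from a hypothetical case-mix model, e.g., BMI >30, cancer)
hospitals = [
    ("A", 2.1, 2.0),
    ("B", 3.4, 1.9),
    ("C", 3.3, 3.4),  # high raw rate, but a high-risk case mix
    ("D", 1.8, 1.7),
    ("E", 2.9, 1.8),  # moderate raw rate, but a low-risk case mix
    ("F", 2.6, 2.8),
    ("G", 2.4, 2.3),
    ("H", 2.0, 2.3),
]

def worst_quartile(scores):
    """Hospitals whose score exceeds the 75th percentile (higher = worse)."""
    cutoff = statistics.quantiles([s for _, s in scores], n=4)[2]
    return sorted(name for name, s in scores if s > cutoff)

unadjusted = [(name, obs) for name, obs, _ in hospitals]
adjusted = [(name, obs / exp) for name, obs, exp in hospitals]  # O/E ratio

print("Penalized without risk adjustment:", worst_quartile(unadjusted))  # ['B', 'C']
print("Penalized with O/E adjustment:    ", worst_quartile(adjusted))    # ['B', 'E']
```

In this toy example, hospital C leaves the penalty quartile once its high-risk case mix is accounted for and hospital E enters it; the point is simply that quartile membership, and with it the 1% penalty, can hinge on whether and how rates are risk adjusted.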


Health policy experts have previously warned that health outcome metrics can be inaccurate because of a failure to risk adjust appropriately. Gilman et al wrote that “using health outcomes as a metric of value is…potentially problematic because severity of illness and social challenges that affect health management might not be fully captured in risk adjustment models.” As a result of this potential bias, their research found that “safety net hospitals were at greater risk of receiving reduced payments than other hospitals” and “were also less likely than other hospitals to be receiving bonus payments.” The negative financial effects of inaccurate measurement caused by a failure to risk adjust for social and demographic characteristics could result in a hospital being penalized twice, because these factors also play a significant role in hospital readmission rates (HRRP).


The limitations of current quality measures are not confined to inadequate risk adjustment; there are also concerns that some measures do not appropriately capture the problem they are intended to assess. Calderon et al describe concerns about the accuracy of the catheter-associated urinary tract infection (CAUTI) measure. Currently, almost one third of the CMS HAC reduction program penalty is based on the Centers for Disease Control and Prevention (CDC) CAUTI metric. This metric is based on self-reported data and is standardized as the number of infections per 1000 catheter days. In contrast, the Agency for Healthcare Research and Quality (AHRQ) CAUTI metric relies on nurse reporting of randomly selected cases using a standardized reporting instrument, and the resulting data are standardized as the number of infections per 1000 hospital discharges. From 2009 through 2013, the CDC metric demonstrated a 5% increase in the rate of CAUTI, whereas the AHRQ metric found a 28% decrease in the incidence of CAUTI. The authors question the validity of this metric given the wide variability in the results. Furthermore, they express concern that using catheter days as the denominator provides an incentive to use catheters longer than necessary, which runs counter to the primary quality goal for which the metric was designed.
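
Written out, the 2 standardizations differ only in their denominators:

\[
\text{CDC rate} = \frac{\text{CAUTIs}}{\text{catheter days}} \times 1000,
\qquad
\text{AHRQ rate} = \frac{\text{CAUTIs}}{\text{hospital discharges}} \times 1000 .
\]

To see how the metrics can diverge, consider a hypothetical hospital (the numbers here are invented for illustration) with 10,000 discharges, 10 CAUTIs, and 5000 catheter days: its rates are 2.0 per 1000 catheter days and 1.0 per 1000 discharges. If the hospital then removes catheters sooner, so that catheter days fall to 4000 and infections fall to 9, the CDC rate rises to 2.25 while the AHRQ rate falls to 0.9. A catheter-day denominator can thus reward leaving catheters in longer and penalize appropriate early removal, which is exactly the incentive problem the authors describe.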


An additional limitation is that some quality measures capture events that are, to some degree, outside the control of the health care organization or provider. For example, rates of third- and fourth-degree perineal lacerations have been used as a patient safety indicator by the AHRQ, and the Joint Commission includes the rate of these complications in its Pregnancy and Related Conditions Core Measures. However, the National Quality Forum withdrew its support for this measure, “citing concerns around unreliable data… a majority of risk factors not being amenable to prevention, and no interval change in laceration rates after 2003, when laceration rates were adopted as a quality measure.” Furthermore, in a review of data from the Nationwide Inpatient Sample, Friedman et al found that the “large majority of hospitals in our analysis had adjusted laceration rates that were statistically indistinguishable when including 95% confidence intervals, precluding meaningful comparisons between different institutions.”
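
As a rough sketch of why overlapping confidence intervals preclude such comparisons, consider 2 hypothetical hospitals with similar delivery volumes; the numbers below are invented, and the interval is a simple normal-approximation (Wald) confidence interval rather than the risk-adjusted estimates used by Friedman et al.

```python
# Hypothetical illustration: two hospitals with different observed laceration
# rates whose 95% confidence intervals overlap, making the difference
# statistically indistinguishable. Not the adjustment method of Friedman et al.
from math import sqrt

def wald_ci(events, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = events / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

for name, events, deliveries in [("Hospital X", 15, 500), ("Hospital Y", 20, 500)]:
    low, high = wald_ci(events, deliveries)
    print(f"{name}: rate {events / deliveries:.1%}, 95% CI {low:.1%} to {high:.1%}")

# Hospital X: rate 3.0%, 95% CI 1.5% to 4.5%
# Hospital Y: rate 4.0%, 95% CI 2.3% to 5.7%
```

Even though hospital Y's observed rate is a third higher than hospital X's, the 2 intervals overlap substantially, so a ranking or penalty based on the point estimates alone would not reflect a statistically meaningful difference.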


The goal of this editorial is to familiarize the reader with the 3 current CMS programs aimed at improving the quality and efficiency of US health care and to describe some of the concerns surrounding 3 of the quality metrics currently utilized in these programs. It is not the goal of this editorial to criticize the tremendous efforts currently underway to improve the quality of the health care that we provide, nor is it to criticize the use of quality metrics. Goals are not truly achievable or tangible unless we define objective measures for meeting them. However, it is imperative that, in our zeal to improve, we choose metrics that are accurate, are fair, and achieve their intended goal without significant unintended consequences. The law of unintended consequences holds that the actions of people always have effects that are unanticipated or unintended. Economist Rob Norton wrote, “Economists and social scientists have heeded its [the law of unintended consequences] power for centuries; for just as long, politicians and popular opinion have largely ignored it.” Defining and measuring quality is an extremely challenging proposition and critically important for the future of health care. Physicians, rather than politicians or public opinion, must be at the forefront in selecting, validating, and implementing future quality metrics. Physician training is steeped in the concepts of scientific validity and primum non nocere: first, do no harm. Therefore, we must create, study, and advocate for the use of appropriate quality metrics. Improving the quality of the care that we provide to our patients is too important.
