Programmes dedicated to driving improvement in healthcare quality have grown dramatically in the last two decades. Accreditation programmes, along with performance measurement and reporting, have been central to these efforts. In the USA, public reporting and financial rewards and penalties tied to results have driven a proliferation of hundreds of quality measures across dozens of programmes at every level of healthcare. Measures are now routinely included in contracts that government and commercial payers establish with delivery organisations. Many of these measures, designed to evaluate the quality of care for large populations, have been applied to measure the quality of ambulatory practice groups and even individual clinicians, with little attention to the statistical validity or utility of the results.
A backlash against performance measurement has gained momentum in recent years. Clinicians and policymakers are increasingly questioning the value of such programmes. Sceptics highlight three concerns. First is the financial cost of measurement and reporting.1 In surveys of the public in the USA, approximately half of respondents identify the cost of healthcare as their top concern. Anything that adds to the cost of delivering healthcare, therefore, has the potential to deter people from seeking care.
The second concern is the burden of measurement and reporting on the healthcare workforce. In many systems, precious clinician time is diverted to the administrative tasks associated with recording, gathering, correcting and reporting the necessary data. Where measures and reporting protocols are not standardised across payers or government agencies, the administrative burden is multiplied due to staff and other resources required to sustain a variety of non-aligned programmes. Since the COVID-19 pandemic, clinician burnout has grown, making healthcare delivery systems even more vulnerable to the adverse consequences of burdening staff with administrative tasks.
Perhaps the top concern is persistent doubt about whether performance measurement drives real improvements in quality. Despite the logic and intuitive appeal of ‘managing what is measured’, relatively few studies have demonstrated that use of quality measures (with or without associated financial incentives) is sufficient to produce meaningful improvement in the absence of organised care systems with capable quality improvement teams.2 3 Some observers have asked whether programmes are actually ‘measuring what matters’.4 The evidence is mixed. A systematic examination of prominent quality measures found that most had limited validity.5 Yet some national reports reveal modest improvement on selected measures such as controlling high blood pressure.6
In this issue of BMJ Quality and Safety, Hesselink et al demonstrate the benefits of winnowing quality measure sets to a ‘vital few’.7 In prior work, the authors found that hospital clinicians in the Netherlands reported on a mean of 91 quality indicators with 1380 underlying variables per focus area, and they subsequently used a consensus process to narrow the 91 to a core set of 17 indicators for intensive care unit (ICU) quality reporting.8 In the present study, they implemented the smaller core set in seven hospital ICUs, surveying physician and nurse staff at three time points. They found a substantial reduction in the median time spent on daily documentation (nearly halving it, to 30 min daily) and a reduction in reported unnecessary and unreasonable administrative tasks. Notably, there were no changes in standardised mortality rates or ICU readmission rates, and the authors did not see an increase in joy in work, although these are relatively broad indicators that may have been insensitive to smaller changes in quality. Of note, many clinicians persisted in collecting the data for the broader set of indicators, highlighting the well-known challenge of abandoning old practice patterns and adopting new ones to accomplish quality improvement.
While the study focused on ICU care, it offers a suggestive microcosm of what can be achieved when the number of quality measures is reduced. The findings are especially relevant in the postpandemic era when the health workforce is experiencing high levels of burnout and dissatisfaction in the face of demands for cost containment and high public expectations.
The study also suggests that we should hold our quality measurement programmes to a higher standard. If quality measures are not contributing to improvement, then programme leaders should ask why. Unfortunately, measuring quality without linking those measured results to action plans for improvement is common. Changing well-established practices is challenging. Over-reliance on busy clinicians to re-engineer systems and workflows is unlikely to produce meaningful change. In the USA, even measures linked to performance-based payment incentives often fail to show systematic or sustained improvement outside of a handful of settings.
The combination of financial cost, burden and limited effectiveness has prompted several initiatives to reduce the number of measures and focus on a meaningful vital few.9 The US Centers for Medicare and Medicaid Services recently promoted a ‘Universal Foundation’ of quality measures, identifying approximately 23 of more than 500 measures as the most meaningful and useful for alignment across federal agencies and programmes. A focus on a trimmed set of measures should enable more standardisation of measure specifications, reduce administrative burden and better guide the efforts of clinicians toward those actions that will improve population health outcomes. In addition, performance measure developers should embrace a holistic approach, considering how selected measures can be bundled and deployed with more efficient sampling techniques to evaluate underlying performance constructs such as chronic disease care, equity or safety.10
Reducing and prioritising measures to a ‘vital few’ is just one among many interventions that could improve quality improvement systems. Additional steps could reduce burden. Among the most powerful may be to stop relying on clinical staff to properly record (or register) the data needed for performance measurement. In many other industries, digital automation significantly reduces the cognitive burden on professionals. The location of airliners in flight is vital information to guide air traffic controllers and prevent collisions. Few pilots would find it acceptable to take time out every 3 min to manually report the coordinates of a plane in flight. In modern aviation, location reporting is automated, and data are available for analysis in near real time.
Healthcare data may be more complicated and less standardised, but several digital technology solutions are now available. For example, ambient sensors can detect and record the occurrence of actions like handwashing to prevent nosocomial infection or the frequency of turning bed-bound patients to prevent pressure ulcers. Rapidly improving large language models can create remarkably accurate narrative and structured data by ‘listening’ to clinician–patient interactions. Data generated by these advances could enable more meaningful measures. Government and industry efforts to digitally enable performance measurement are under way, creating electronic Clinical Quality Measures (eCQMs) and the next generation of digital Quality Measures (dQMs) that leverage fully standardised, interoperable and exchangeable digital health data from a variety of source systems.11
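To make the idea concrete: real eCQMs and dQMs are formally specified (for example, in Clinical Quality Language against standardised data models such as FHIR), but the underlying principle can be illustrated with a deliberately simplified sketch. The hypothetical Python example below, using made-up record structures and a threshold loosely modelled on a ‘controlling high blood pressure’ style measure, shows how a measure could be computed directly from structured digital data rather than from additional clinician documentation. It is an assumption-laden illustration, not an implementation of any actual measure specification.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical, simplified record structures. Real dQMs are specified against
# standardised data models (eg, FHIR resources), not ad hoc classes like these.
@dataclass
class BloodPressureReading:
    taken_on: date
    systolic: int   # mm Hg
    diastolic: int  # mm Hg

@dataclass
class PatientRecord:
    patient_id: str
    has_hypertension_diagnosis: bool
    readings: List[BloodPressureReading]

def latest_reading(record: PatientRecord) -> Optional[BloodPressureReading]:
    """Return the most recent blood pressure reading, if any."""
    if not record.readings:
        return None
    return max(record.readings, key=lambda r: r.taken_on)

def blood_pressure_controlled(record: PatientRecord) -> bool:
    """Count a patient toward the numerator if the most recent reading
    is below 140/90 mm Hg (illustrative threshold only)."""
    reading = latest_reading(record)
    return reading is not None and reading.systolic < 140 and reading.diastolic < 90

def controlling_high_blood_pressure_rate(records: List[PatientRecord]) -> float:
    """Denominator: patients with a hypertension diagnosis.
    Numerator: those whose latest reading is controlled."""
    denominator = [r for r in records if r.has_hypertension_diagnosis]
    if not denominator:
        return 0.0
    numerator = [r for r in denominator if blood_pressure_controlled(r)]
    return len(numerator) / len(denominator)

if __name__ == "__main__":
    cohort = [
        PatientRecord("a", True, [BloodPressureReading(date(2024, 3, 1), 132, 84)]),
        PatientRecord("b", True, [BloodPressureReading(date(2024, 2, 10), 152, 96)]),
        PatientRecord("c", False, [BloodPressureReading(date(2024, 1, 5), 118, 76)]),
    ]
    print(f"Controlled: {controlling_high_blood_pressure_rate(cohort):.0%}")
```

In a fully digital pipeline, logic of this kind would run automatically against interoperable data already captured during routine care, so no clinician time is spent re-recording it for reporting purposes.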
Improving joy in work and reducing burnout among clinicians is vital to support quality and safety. Clinician time is precious. The study by Hesselink et al did not show improved joy in work despite a substantial decrease in time spent on documentation for quality indicators. However, the persistence of residual documentation suggests an even greater opportunity to save clinician time. Reducing the number of measures and promoting digital innovations that free clinicians from the burden of quality measure documentation are solid steps. Their impact could be enhanced by adding deimplementation strategies designed to actively remove outmoded tasks.12 Combining these approaches could produce a more efficient and effective quality measurement enterprise that contributes to clinician well-being while enhancing the quality impact of performance measurement.
Ethics statements
Patient consent for publication: Not applicable.
Ethics approval: Not applicable.