Under the principle of non-maleficence, healthcare providers are compelled to heal and cure patients of their sickness, disease, and injuries, and to do so under the ethical premise to “do no harm”.1 While additional harm should be avoided throughout every healthcare encounter, patients routinely suffer injuries originating from hospital safety lapses that could have been prevented. The prevalence of such events is significant. The US Centers for Disease Control and Prevention (CDC) estimates that nearly 1.7 million hospitalized patients develop a hospital-acquired infection (HAI) each year, resulting in more than 98,000 deaths and anywhere from $28 to $45 billion in annual excess costs.2,3 Some have noted that medical errors, if appropriately identified, would rank as the third leading cause of death.4 These authors also estimate that an average of 251,454 patients die every year in the United States from medical errors.4 Although efforts have been made over the years, patient safety events such as medication errors, surgical mistakes, and preventable HAIs continue to occur and cause undue harm during the care delivery process.5
In 2005, partially in response to these issues, the federal government enacted the Deficit Reduction Act (DRA). This legislation vested the Centers for Medicare and Medicaid Services (CMS), as the largest health insurer and payer in the United States, with the authority to adjust payments to hospitals based on their recent historical performance in preventing hospital-acquired conditions (HACs).6 The DRA enabled CMS to address government healthcare expenditures for covered beneficiaries by implementing value-based purchasing models. Through this effort, the “value” of healthcare services was emphasized, and greater effort was placed on strengthening the relationship between the quality and costs of care provided.7
In 2014, after several years of evaluation, the HAC program evolved into the Hospital-Acquired Condition Reduction Program (HACRP), which sought to reduce reimbursements to hospitals that cause injury or harm to patients from HACs, including preventable HAIs.8 Under the HACRP, CMS annually evaluates hospital performance by calculating Total HAC Scores, and hospitals with scores above the 75th percentile (worst-performing) receive a 1-percent payment reduction on all Medicare fee-for-service discharges for that fiscal year.8 The HAIs that CMS began to track for this potential reduction in reimbursements included CLABSI (Central Line-Associated Bloodstream Infection), CAUTI (Catheter-Associated Urinary Tract Infection), SSI (Surgical Site Infection for Abdominal Hysterectomy and Colon Procedures), MRSA (Methicillin-resistant Staphylococcus aureus) bacteremia, and CDI (Clostridioides difficile infection).9 CMS measures these HAIs using the CDC’s evaluation of hospital performance for each HAI, expressed as the standardized infection ratio (SIR), which compares observed to predicted HAIs, with predictions determined through a risk-adjustment process based on data from the National Healthcare Safety Network (NHSN).9 Additionally, CMS PSI (Patient Safety Indicator) 90 scores were added to the HACRP, and their values are included in the calculation of the overall Total HAC score.9 CMS PSI 90 scores are a compilation of 10 different patient safety measures that enable monitoring of performance and comparative analysis to provide an indicator of patient safety at the hospital level.10
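For reference, the SIR compares the number of infections a hospital actually reported with the number predicted by the NHSN risk-adjustment model, with values below 1.0 indicating fewer infections than predicted. In its basic form:

\[ \mathrm{SIR} = \frac{\text{Observed HAIs}}{\text{Predicted HAIs}} \]

so, as a purely illustrative example, a hospital with 8 observed and 10 predicted CLABSIs would have a CLABSI SIR of 0.8.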
Although patients may become infected with other microorganisms and diseases during their healthcare encounters, the five infections that CMS has chosen to focus on involve some of the most difficult organisms to treat and cure.11 Currently, one out of every 25 patients who enter the US healthcare system acquires an infection during their care, leading to around 90,000 deaths annually.12 Additionally, when these infections are not prevented, the cost of treating them ranges from roughly $1000 to $50,000 per incident, depending on the type of infection and how early it is caught in the disease process.13 Multiplied across the number of patients with HAIs, billions of dollars continue to be wasted annually treating infections that could have been prevented.12
Although CMS instituted a program focused on reducing HACs, specifically HAIs, what remains unknown is whether this financial penalty, applied through the potential reduction in reimbursements, has improved the quality of care within hospitals by creating a safer, healthier environment through a reduction in the overall number of HAIs across the country. A few researchers have investigated this question in various ways. A prior study found no evidence of the effectiveness of financial disincentives from the initial 2008 HAC program, specifically addressing CLABSI and CAUTI.14 However, that study’s analysis was completed before the HACRP came into effect in 2014. Another study noted that financial penalties were not associated with significant changes in HAIs.15 Conversely, other studies noted improvements in HAIs after the 2008 HAC program, finding reductions in CLABSI and CAUTI in their analyses.16,17 Another study noted that the rate of decline for hospital-acquired conditions targeted by the HACRP increased significantly following the program’s announcement.18 Likewise, a later study noted a reduction in HACs in its review of HACRP data from 2010 to 2018; however, that analysis was limited to state-level (Michigan) data.19 These mixed findings lead us to explore whether anticipated decreases in CMS reimbursements to hospitals under the HACRP have impacted HAIs across the United States.
Study Data and Methods
Data Source and Study Population
This study set out to evaluate whether hospitals’ decreased reimbursements from CMS were associated with changes in HAIs. The primary data sources included CMS claims and Total HAC score data for the years 2013–2019.8 The secondary data source, which provided the control variable data, was the Definitive Healthcare website. Definitive Healthcare provides a centralized database of health system data collected from publicly available datasets (state, federal, agency) and data from licensed companies.20
At the time of our analysis, CMS administered data for the HACRP for 3436 hospitals. We removed 540 facilities because of significant data missingness, leaving a final sample of 2896 hospitals, or roughly 84% of the total number of HACRP reporting institutions. The missing data involved hospitals that did not have Total HAC scores in both the first and last years evaluated. The reported HACRP data included the Total HAC scores, which are equally weighted average scores of the following measures: the individual HAI scores (CLABSI, CAUTI, SSI, MRSA, and CDI) and the CMS PSI 90 scores for each hospital analyzed. For each annual evaluation in this study, the values and scores produced by CMS account for 2 years’ worth of scoring (for example, the 2013 data cover 01 JAN 12 – 31 DEC 13). However, the 2020 evaluation contained only 1 year’s worth of data (01 JAN 19 – 31 DEC 19) due to the onset of the COVID-19 pandemic; these were also the most recent data available to be analyzed at the time of this study.
Variables
Based on our difference-in-differences study design and applying a multiple linear regression model with random effects, the main independent variable is an interaction term.21 By implementing a difference-in-differences design, this study evaluated the policy’s effects on two groups: hospitals that potentially would have been penalized in 2013 and those that would not have been. The differences in outcomes between these two comparison groups provide insight into the HACRP’s relationship with hospital performance.21,22 The first variable is the CMS reimbursement penalty (yes/no), which is based on the expected financial penalty from the Total HAC score in 2013 and is defined by whether hospitals were expected to be in the worst-performing, 4th quartile (hospitals that would receive reductions to their CMS reimbursements) or in the performing, 1st–3rd quartiles (hospitals that would not receive a reduction in reimbursements). Hospitals in the worst-performing group would expect to be penalized in 2014, when the penalties were announced and enacted. Because 2013 was the pre-program year and CMS did not produce values indicating whether a hospital would have been penalized that year, we z-scored the hospital Total HAC scores for that year and assigned hospitals to the two groups accordingly. The second variable accounts for the years in the study (2013–2020). Thus, the interaction term is the product of the binary penalty variable and the reporting years.
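For clarity, a simplified statement of this specification, written under the assumption of a random intercept for each hospital and with the control variables collected into a single vector, is:

\[ Y_{it} = \beta_0 + \beta_1\,\text{Penalized}_i + \beta_2\,\text{Year}_t + \beta_3\,(\text{Penalized}_i \times \text{Year}_t) + \gamma^{\prime} X_{it} + u_i + \varepsilon_{it} \]

where Y_it is a given score (eg, the Total HAC score) for hospital i in year t, Penalized_i indicates expected placement in the worst-performing quartile in 2013, X_it is the vector of hospital-level controls, u_i is the hospital-level random effect, and β3 on the interaction term is the difference-in-differences estimate of interest.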
The primary dependent variables for the study were the individual HAI scores, the CMS PSI 90 scores, and the Total HAC scores. All were assessed on a scale of 1–10, with lower scores indicating better performance. These dependent variables are defined as the score ratings of preventable infections that patients acquired while in the care of a hospital (CAUTI, CDI, CLABSI, MRSA, SSI), patient safety and adverse event scores (CMS PSI 90), and the Total HAC scores derived from those combined variables. The Total HAC score is ultimately what designates whether a hospital receives a reduction in CMS reimbursements. Although all dependent variables were calculated in the later years of the study, only CAUTI, CLABSI, and CMS PSI 90 score data were evaluated for 2013. SSI data were added to the CMS HACRP data in 2014, and CDI and MRSA were added in 2015.
Control variables at the hospital and regional levels were also included in the study. The full complement of control study variables is provided in Table 1. The study incorporates multiple commonly used variables to address the variations in hospital quality attributed to a range of individual hospital characteristics including average length of stay, average number of licensed beds, case mix index, total performance scores, bed utilization rates, geographic classification (urban vs rural), academic medical center designation, and ownership (government, proprietary, or not-for-profit).
Table 1 Characteristics of 2896 Hospitals Based on Whether the Hospitals Were Penalized or Not in 2013 and 2020
The study also included some additional measures as controls to further isolate our independent variable of interest. First, the analysis included the complication/comorbid and major complication/comorbid (CC/MCC) rates for each hospital. This is a ratio of cases admitted to the hospital that include highly complex or severe conditions. Higher rates are associated with higher reimbursement, but also higher costs. Second, we included each facility’s Hospital Compare overall rating which summarizes 46 quality measures across 5 domains into a single star rating for each hospital. The higher the number of stars, the better a hospital performed. The 5 Hospital Compare domains include measures of mortality, safety, readmissions, patient experience, and timely and effective care. Lastly, we also added a more focused assessment of patient perceptions of care via inclusion of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) Summary Star ratings. This measure is an average of each hospital’s ratings over 6 composite topics pertaining to nurse communication, doctor communication, responsiveness of the hospital staff, communication about medicines, discharge information, and care transitions. The higher the number of stars, the better the facility is perceived to have met patients’ quality expectations (Table 1).
Study Design and Statistical Analysis
To meet the inclusion criteria, Total HAC scores had to be available in both the 2013 and 2020 data years for each hospital. Thus, any hospital that did not have a Total HAC score in both years was removed, leading to a final sample of 2896 hospitals, or 84% of the total HACRP population.
Regarding the sample data, the first 2 years of CMS data (2013 and 2014) contained numerical scores for each of the study variables; however, the data for the remaining years (2015–2020) contained only Winsorized z-scores for every variable. As a result, we reversed the z-score calculation to convert each Winsorized z-score back to an approximate numerical score for comparative analysis. We used the 2013 and 2014 data to produce the means and standard deviations used in the conversion, and each variable (CAUTI, CDI, Total HAC, etc.) was recoded using those values. Upon completing this recoding, several hospitals had converted scores outside the normal score range for these variables. We therefore recoded these values (5.6%) at the upper and lower limits, reducing values over 10 to 10.0 and increasing values below 1 to 1.0.
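As an illustration of this back-conversion step, the following is a minimal sketch in Python (not the software we used; the function name and example values are hypothetical), assuming the mean and standard deviation estimated from the 2013–2014 scored data:

```python
import numpy as np

def z_to_score(z_scores, mean_2013_14, sd_2013_14, lower=1.0, upper=10.0):
    """Convert Winsorized z-scores back to approximate raw scores.

    Reverses z = (x - mean) / sd using the mean and standard deviation
    estimated from the 2013-2014 scored data, then recodes any converted
    values falling outside the 1-10 scale at its upper and lower limits.
    """
    raw = np.asarray(z_scores, dtype=float) * sd_2013_14 + mean_2013_14
    return np.clip(raw, lower, upper)

# Example: hypothetical CAUTI z-scores for three hospitals
print(z_to_score([-0.8, 0.1, 2.6], mean_2013_14=5.2, sd_2013_14=2.1))
```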
Multiple linear regression models with random effects were constructed to perform the difference-in-differences analysis for the Total HAC score, each HAI, and the CMS PSI 90 scores, resulting in seven models. In our preliminary model using the Total HAC score as the dependent variable, we found evidence of multicollinearity among our initial set of independent variables; thus, all variables with a variance inflation factor (VIF) higher than 10 were removed from all models. The rationale for using the random effects model was that the variation across hospital IDs was assumed to be random and uncorrelated with the predictor and independent variables included in the model.23 Random effects were therefore included to account for correlation within hospitals (ie, the potential that observations from the same hospital were not independent).
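A minimal sketch of this modeling step, translated into Python for illustration (our analysis used JMP; the file name, column names, and control set below are assumptions), might look like:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# One row per hospital-year; column names here are hypothetical.
df = pd.read_csv("hacrp_panel.csv").dropna()

# Screen continuous controls for multicollinearity; keep only those with VIF <= 10.
controls = ["avg_length_of_stay", "licensed_beds", "case_mix_index", "bed_utilization"]
X = df[controls]
keep = [col for i, col in enumerate(controls)
        if variance_inflation_factor(X.values, i) <= 10]

# Difference-in-differences: penalized-in-2013 indicator, year, and their
# interaction, with a random intercept for each hospital (the random effect).
formula = "total_hac_score ~ penalized_2013 * year"
if keep:
    formula += " + " + " + ".join(keep)

result = smf.mixedlm(formula, data=df, groups=df["hospital_id"]).fit()
print(result.summary())
```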
The difference-in-differences study design represents a quasi-experiment that utilizes longitudinal data from both treatment and control groups to derive a suitable counterfactual for estimating a causal effect.23 Typically employed to assess the impact of a specific intervention or treatment, such as the policy evaluated in this study, this method involves comparing the temporal changes in outcomes between the penalized hospitals (intervention group) and the hospitals not penalized (control group).24
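In its simplest two-period form, and under the usual parallel-trends assumption, this comparison can be written as:

\[ \hat{\delta}_{DiD} = \left(\bar{Y}^{\text{penalized}}_{\text{post}} - \bar{Y}^{\text{penalized}}_{\text{pre}}\right) - \left(\bar{Y}^{\text{not penalized}}_{\text{post}} - \bar{Y}^{\text{not penalized}}_{\text{pre}}\right) \]

where the second difference nets out changes that would have occurred in both groups regardless of the program.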
Results
The descriptive statistics for the characteristics of the hospitals in the study are available in Table 1. Because the current HACRP methodology inadequately addresses variability among patients and hospitals, including a wide range of diagnoses, and increases risk for hospitals that perform more surgeries due to the greater number of HACRP measures applicable to surgical patients, these hospital characteristics were included as control variables in our analysis to adjust for this risk.11 Our sample comprises 6.1% academic medical centers; 15.2% of hospitals are government operated, 62.1% are not-for-profit, and the largest share (89.4%) are located in urban areas. Our sample hospitals maintain an average of 228 beds, manage an average length of stay of 4.7 days, experience an average CC/MCC rate of 63.9%, and on average utilize 51.3% bed occupancy.
Figure 1 shows trends in the unadjusted HAI, CMS PSI 90, and Total HAC scores (by mean score) over the study period. Hospital scores were separated based on those that would have been penalized had the policy been in effect in 2013 (ie, the worst-performing quartile) from those that would not have been penalized (ie, the performing quartiles) and plotted over time. Only three categories (CAUTI, CLABSI, and CMS PSI 90) were included in the initial 2013 Total HAC scoring. SSI was added in 2014; CDI & MRSA in 2015. For hospitals that would have been penalized in 2013 (ie, in the worst-performing quartile), there was a sharp decrease (improvement) in Total HAC scores from 2013 to 2016 and a more gradual decline for most HAIs across the remainder of the study years.
Figure 1 Annual Average HAI, CMS PSI 90, and Total HAC Scoring for Hospitals based on 2013 placement into the worst-performing or performing quartile groups.
Note: N = 2896 hospitals total, lower scores are better.
Conversely, the mean HAI scores of hospitals in the performing quartiles from 2013 (hospitals that would not have been penalized) increased (worsened) over the time period. However, unlike in the penalized group, the later-added HAIs in the performing quartiles group initially started at higher scores than the original HAIs, which could have skewed the Total HAC scoring over those years.
To perform the difference-in-differences analysis, we used JMP (version 17) software25 to run multiple linear regression models with random effects for the Total HAC scores, each of the HAI scores, and the CMS PSI 90 scores. The Total HAC score was modeled first, as it is the primary factor CMS uses to determine which hospitals are penalized in their CMS reimbursements. Table 2 shows the consolidated results of our 7 multiple linear regression models with random effects, including all hospital data from 2013 to 2020. The Total HAC score model (Table 2, column labeled TOT HAC SCORE) produced a statistically significant negative estimate for the interaction term (b3) based on the relationship of the two primary predictor variables (b1, b2) (−0.412, p ≤ 0.001), providing evidence that Total HAC scores decreased (improved) over the years for hospitals in the worst-performing quartile from 2013, holding all other variables constant. Hospitals with the worst-performing HAI scores in 2013 saw average annual declines (improvements) across all HAIs and the CMS PSI 90, with the greatest drops in CAUTI (−0.400, p ≤ 0.001) and CMS PSI 90 (−0.412, p ≤ 0.001), a moderate decline in CLABSI (−0.282, p ≤ 0.001), and minor decreases in SSI (−0.089, p ≤ 0.001) and CDI (−0.079, p ≤ 0.01). The only HAI that showed no sign of improvement was MRSA (−0.053, p > 0.05). Although random effects were included in this multiple linear regression model, the random effect was found not to be significant.
Table 2 Results of Multiple Linear Regression Models with Random Effects Analyses from 2013 to 2020
Table 3 shows a cross-tabulation between hospitals in the worst-performing and performing quartiles as of 2013 and whether these hospitals improved or worsened in their scores over the study period. Among hospitals that would have been penalized in 2013 for falling within the worst-performing quartile group, 98.1% showed improvement in their 2020 Total HAC scores compared to their 2013 scores. However, even though most of those worst-performing quartile hospitals had shown improvement in their Total HAC scores, roughly 29% of those hospitals were still found to be in the worst-performing hospital quartile in 2020 (Figure 2). Essentially, although these 29% of hospitals showed improvement in their Total HAC scores, they continued to receive reductions in their reimbursements from CMS. Conversely, among hospitals that would not have been penalized in 2013 (ie, performing quartiles), only 38.8% improved their Total HAC scores with approximately 61.2% (1328/2171) of hospitals having worse Total HAC scores in 2020 than in 2013.
Table 3 Evaluation of Total HAC Score Changes of Hospitals from 2013 to 2020: Quartile Groups (2013) by Total HAC Score Changes, Crosstabulation (Count)
Figure 2 Hospitals remaining in worst-performing quartile group among those originally expected to be worst-performing in 2013.
Notes: N = 725 hospitals originally expected to be in the worst-performing quartile in 2013; only 213 (29%) of those hospitals still remained in the worst-performing group by 2020.
Discussion
In reviewing our findings, we found a significant and meaningful association between the implementation of the HACRP policy and HAI scores through difference-in-differences analyses using multiple linear regression models with random effects. We found evidence that the policy, whose main driver is negative reinforcement through a financial penalty, was effective in decreasing HAIs across the hospitals examined in the study. Although previous authors were mixed in their conclusions on the effectiveness of the 2008 HAC program, a precursor to the HACRP, many of their analyses were completed before the HACRP came into effect in 2014. Additionally, whereas those studies analyzed only the overall aggregate HAI reduction with no delineation of groups, a strength of this study is its focus on two specific groups under the HACRP (performing and worst-performing) and their changes over time.
Although causality cannot be proved, the findings from our HACRP analysis show that HAI scores improved over the timeline of the study, which at least partially substantiates the purpose and intent behind the program. Applying a financial disincentive across other HACs therefore has the potential to reduce their occurrence and prevent harm to more patients. Further, based on our study findings, policy makers may find a logical basis to maintain, and perhaps expand, the HACRP to encompass additional patient safety measures.
Policy Implications
Overall, improvements were found in the reduction of HAIs from the pre-program year to the most recent year examined. However, when analyzed separately, the two groups that make up the study population (hospitals that would have been penalized in the pre-program period and those that would not have been) shaped the overall results in different ways. Given this, one possible improvement to the program would be to penalize or reward each hospital for its efforts in reducing HAIs by evaluating each hospital’s Total HAC score against itself, year to year.
From a financial perspective, the HACRP penalizes hospitals that fall in the worst-performing quartile group via a 1% reduction in CMS reimbursement. An alternative approach would be to add a 0.25% reduction or increase as a penalty or reward for every hospital. As a penalty, hospitals that see an increase (worsening) in their Total HAC scores would receive a 0.25% reduction in their CMS reimbursement. As a reward, hospitals that show improvement (through a decrease in Total HAC scores) would receive an additional 0.25% in their CMS reimbursements. Therefore, the CMS reimbursement reduction/reward would range from −1.25% to +0.25%.
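A minimal sketch of how this proposed adjustment could be computed for a single hospital is shown below (the 1% base penalty and 0.25% increment follow the proposal above; the function and its inputs are otherwise hypothetical):

```python
def proposed_adjustment(in_worst_quartile: bool, hac_score_improved: bool) -> float:
    """Return the proposed CMS reimbursement adjustment, in percentage points."""
    base = -1.0 if in_worst_quartile else 0.0          # existing HACRP penalty
    increment = 0.25 if hac_score_improved else -0.25  # proposed year-to-year reward/penalty
    return base + increment

# Range check: -1.25 (worst quartile, worsened) to +0.25 (not penalized, improved)
print(proposed_adjustment(True, False), proposed_adjustment(False, True))
```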
Accomplishing and implementing this solution might help to extend the impact of the HACRP across every hospital, not just those that fall into the worst-performing quartile group. An added benefit of this approach is that hospitals in the worst-performing quartile that show incremental year-to-year improvements could benefit from their efforts more quickly than under the existing program guidelines (eg, a decrease in the reduction from 1% to 0.75%). Of course, facilities that fail to show progress, merely maintain their performance, or allow it to decline would still incur the same penalty as under the current policy. Likewise, hospitals that have not previously seen reductions (based on placement in the performing quartiles group) would also be assessed on how well they manage their HAIs year to year. Given our finding that a larger proportion of penalized hospitals made improvements in preventing infections, presumably because they were being directly and negatively reinforced, inclusion of all hospitals in the program may have a broader positive impact on HAIs across the entire US healthcare system.
Study Limitations
As with every study, this study had its limitations. First, there were limitations around the data. The “yearly” data consisted of an average of 2 years of data, except for the 2020 year, which contained only 2019 data because of a temporary halt in data collection due to the COVID-19 pandemic. Thus, we were unable to observe HAI scores equally across each year. However, the 7-year time period of the study allowed us to gather sufficient data to observe trends. Additionally, as 2013 was the earliest year of data available to assess in this study, any secular trends in HAIs that may have been occurring prior to 2013 are unknown and may limit assessment of the true impact of the HACRP implementation. Another constraint was that, because the CMS data moved from scored data in 2013 and 2014 to z-scored data from 2015 onward, the years 2015 to 2020 had to be converted manually from z-scores to actual scores. Not knowing the means or standard deviations of each HACRP variable for those specific years, we had to estimate those values from the scored data available (2013 and 2014). Without the exact means and standard deviations for the 2015–2020 data, several hospitals had converted scores that fell outside the normal score range for these variables. Thus, because the means and standard deviations from previous years were applied, the data may be more skewed than the results would have been had CMS produced, and made available, the scored data for each HAI, CMS PSI 90, and Total HAC score.
Another limiting factor was missing data. Of the 3436 hospitals that could have been analyzed within the CMS data set, only 2896 remained after meeting the inclusion criteria of having a Total HAC score in both the pre- and post-program periods. Although the 2896 hospitals provided strength to the study, it is unclear how the missing data could have influenced our results. Additionally, although many control variables were included in the study, there could be unaccounted-for variables that influenced the results. However, our study included a large number of control variables shown to be associated with HAIs in the literature.26–28
Conclusion
Since the implementation of the HACRP, hospitals that would have been financially penalized in 2013 were found to have had the greatest improvements in HACRP scores over the years of the study. Conversely, hospitals that would not have been penalized under HACRP guidelines experienced higher (worsening) HACRP scores over the same years. Our study provides evidence that financial disincentives for hospitals at risk of reimbursement reductions may lead to reductions in HAIs. Based on our findings, adding financial penalties and rewards for HAIs on a year-to-year basis may provide an incentive for improving efforts to reduce HAIs across hospitals in the US healthcare system.
Acknowledgments
This paper is based on the dissertation of Dan Wood.29 It has been published on the institutional website: https://www.proquest.com/pqdtglobal/docview/2802620607/4A428738A7B4966PQ/2?accountid=7014&sourcetype=Dissertations%20&%20Theses.29 The contents of this publication are the sole responsibility of the authors and do not necessarily reflect the views, assertions, opinions, or policies of the US Army Medical Center of Excellence, the Department of the Army, the Department of Defense, or the US Government.
Disclosure
The authors report no conflicts of interest in this work.
References
1. Northwest Association for Biomedical Research (NWABR). Ethics background; 2022. Available from: https://www.nwabr.org/sites/default/files/Principles.pdf. Accessed September 3, 2022.
2. Haque M, Sartelli M, McKimm J, Abu Bakar M. Health care-associated infections – an overview. Infect Drug Resist. 2018;11:2321–2333. doi:10.2147/IDR.S177247
3. Gidey K, Gidey MT, Hailu BY, Gebreamlak ZB, Niriayo YL. Clinical and economic burden of healthcare-associated infections: a prospective cohort study. PLoS One. 2023;18(2):e0282141. doi:10.1371/journal.pone.0282141
4. Makary M, Daniel M. Medical error – the third leading cause of death in the US. Br Med J. 2016;353:1–5.
5. World Health Organization (WHO). Patient safety; 2019. Available from: https://www.who.int/news-room/fact-sheets/detail/patient-safety. Accessed September 6, 2022.
6. Centers for Medicare & Medicaid Services (CMS). Hospital-acquired conditions (present on admission indicator); 2021. Available from: https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/HospitalAcqCond. Accessed November 6, 2022.
7. Nash DB, Joshi MS, Ransom ER, Ransom SB. The Healthcare Quality Book: Vision, Strategy, and Tools. 4th ed. Health Administration Press; 2019.
8. Centers for Medicare & Medicaid Services (CMS). Hospital-acquired condition reduction program (HACRP); 2022. Available from: https://www.cms.gov/medicare/payment/prospective-payment-systems/acute-inpatient-pps/hospital-acquired-condition-reduction-program-hacrp. Accessed November 6, 2022.
9. Centers for Medicare & Medicaid Services (CMS). Hospital-acquired condition (HAC) reduction program: scoring methodology. Available from: https://www.cms.gov/files/document/fy-23-hacrp-infographic.pdf. Accessed November 6, 2023.
10. Centers for Medicare & Medicaid Services (CMS). Quality measures fact sheet – CMS patient safety indicators PSI 90 (NQF #0531) national quality strategy domain: patient safety; 2019. Available from: https://www.cms.gov/priorities/innovation/files/fact-sheet/bpciadvanced-fs-psi90.pdf. Accessed November 7, 2022.
11. Lawton E, Sheetz K, Ryan A. Improving the hospital-acquired condition reduction program through rulemaking. JAMA Health Forum. 2020;1(5):1–3. doi:10.1001/jamahealthforum.2020.0416
12. Lagasse J. Hospital-acquired infections keep rising, wasting billions, finds Leapfrog. Healthcare Finance; 2018. Available from: https://www.healthcarefinancenews.com/news/hospital-acquired-infections-keep-rising-wasting-billions-finds-leapfrog. Accessed October 23, 2022.
13. The Leapfrog Group. Healthcare-associated infections; 2018. Available from: https://www.leapfroggroup.org/sites/default/files/Files/Leapfrog-Castlight%202018%20HAI%20Report.pdf. Accessed November 12, 2022.
14. Sankaran R, Sukul D, Nuliyalu U. Changes in hospital safety following penalties in the US hospital acquired condition reduction program: retrospective cohort study. Br Med J. 2019;366:l4109. doi:10.1136/bmj.l4109
15. Lee GM, Kleinman K, Soumerai SB, et al. Effect of nonpayment for preventable infections in U.S. hospitals. N Engl J Med. 2012;367(15):1428–1437. doi:10.1056/NEJMsa1202419
16. Waters TM, Daniels MJ, Bazzoli GJ, et al. Effect of Medicare’s nonpayment for hospital-acquired conditions. JAMA Intern Med. 2015;175(3):347–354. doi:10.1001/jamainternmed.2014.5486
17. Peasah SK, McKay NL, Harman JS, Al-Amin M, Cook RL. Medicare non-payment of hospital-acquired infections: infection rates three years post implementation. Medicare Medicaid Res Rev. 2013;3(3):E1–E16. doi:10.5600/mmrr.003.03.a08
18. Arntson E, Dimick JB, Nuliyalu U, Errickson J, Engler TA, Ryan AM. Changes in hospital-acquired conditions and mortality associated with the hospital-acquired condition reduction program. Ann Surg. 2019;274(4):e301–e307.
19. Sheetz KH, Dimick JB, Englesbe MJ, Ryan AM. Hospital-acquired condition reduction program is not associated with additional patient safety improvement. Health Aff. 2019;38(11):1858–1865. doi:10.1377/hlthaff.2018.05504
20. Definitive Healthcare. Access deeper intelligence about hospitals and health systems; 2024. Available from: https://www.definitivehc.com/data-products/hospital-view. Accessed June 19, 2024.
21. Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA. 2014;312(22):2401–2402. doi:10.1001/jama.2014.16153
22. Ryan AM, Burgess JF Jr, Dimick JB. Why we should not be indifferent to specification choices for difference-in-differences. Health Serv Res. 2015;50(4):1211–1235. doi:10.1111/1475-6773.12270
23. Torres-Reyna O. Panel data analysis: fixed and random effects using Stata. Princeton.edu; 2007. Available from: https://www.princeton.edu/~otorres/Panel101.pdf. Accessed January 3, 2023.
24. Columbia Mailman School of Public Health, Columbia University Irving Medical Center. Difference-in-difference estimation; 2013. Available from: https://www.publichealth.columbia.edu/research/population-health-methods/difference-difference-estimation. Accessed January 4, 2023.
25. JMP, version 17. JMP statistical discovery; 2024. Available from: https://www.jmp.com/en_us/home.html. Accessed January 10, 2024.
26. Hoffmann BL. Application of long-term care principles in acute care: comparison of nosocomial urinary tract infection rates by case mix. J Am Med Dir Assoc. 2011;12(3):B5–B6. doi:10.1016/j.jamda.2010.12.024
27. Kaier K, Mutters NT, Frank U. Bed occupancy rates and hospital-acquired infections—should beds be kept empty? Clin Microbiol Infect. 2012;18(10):941–945. doi:10.1111/j.1469-0691.2012.03956.x
28. Hassan M, Tuckman HP, Patrick RH, Kountz DS, Kohn JL. Hospital length of stay and probability of acquiring infection. Int J Pharm Healthc Mark. 2010;4(4):324–338. doi:10.1108/17506121011095182
29. Wood DM. An assessment of the relationship between hospital reimbursements and hospital-acquired infections. ProQuest; 2023. Available from: https://www.proquest.com/openview/b265a8dd881b2b2f4a09d412d7321c0c/1?pq-origsite=gscholar&cbl=18750&diss=y. Accessed August 29, 2024.