
Measuring quality in hospitals in the United States

Literature review current through: Jan 2024.
This topic last updated: Mar 01, 2022.

INTRODUCTION — Since the early 1990s, health plans in the United States have been measuring and publicly reporting their performance on measures of quality of care. In part, this was a response to health care purchasers who sought better information about the quality of care they were buying. Performance measurement and reporting have now become commonplace in most health care settings.

Predated by regional efforts [1], national efforts to measure and report hospital performance on quality measures began with a pilot program of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO, subsequently renamed "The Joint Commission") [2]. Beginning in 2001, the Joint Commission, the Centers for Medicare and Medicaid Services (CMS), the American Hospital Association, and other organizations formed the Hospital Quality Alliance (HQA) as a mechanism through which hospitals could submit performance data to CMS [3,4]. Hospital participation in the HQA was voluntary. However, the Medicare Modernization Act of 2003 made receipt of a hospital's full Medicare payment updates contingent upon reporting the initial 10-measure "starter set" to CMS, and since 2012, hospital quality measures have been incorporated into CMS hospital payment rates. Consequently, the vast majority of acute care hospitals in the United States participate in this reporting.

Hospital-level performance data, which can be searched by geographic location, category of health condition (eg, general, medical, surgical), and quality measure, are available to the public at the Hospital Compare website. The original "starter set" reported in 2004 reflected processes of care for only three health conditions (acute myocardial infarction, heart failure, and pneumonia), which are among the most common and clinically important reasons for hospitalization among Medicare beneficiaries [3,4]. The range of measures reported on Hospital Compare has expanded steadily to include process measures for additional conditions, rates of hospital-acquired infections and complications, risk-adjusted mortality for a variety of conditions, patient experience, utilization of health care services (including imaging services and readmissions), and structural measures such as participation in a cardiac surgery registry and use of safe surgery checklists.

QUALITY MEASUREMENT AND QUALITY IMPROVEMENT — Although the United States spends more per capita on health care than other industrialized countries, compelling evidence accumulated over the last two decades suggests that the quality of care delivered by the United States health care system is suboptimal [5]. A core principle of quality improvement is that what is not measured cannot be improved. Consequently, performance measurement and reporting have become ingrained in the health care system.

The ultimate goal of quality measurement and reporting systems is to improve care and outcomes. Efforts to improve documentation without changing the content of clinical care are unlikely to achieve this goal. For quality measurement and reporting efforts to be successful, hospitals and clinicians must engage in efforts to understand the root causes of poor performance and develop fundamentally better systems of patient care that will lead to improved performance across a broad range of potential measures.

MEASURING THE QUALITY OF HEALTH CARE — Quality measurement in health care rests on a conceptual framework presented by Avedis Donabedian in a seminal 1966 paper [6]. Donabedian's framework conceptualizes three categories of quality, and quality measures have been developed in each of these areas:

Outcomes represent the ultimate goal of health care and include measures of patients' quantity and quality of life. The risk-adjusted 30-day mortality rate following hospital discharge is an example of an outcome measure of quality. Health outcomes are affected by health care, but they are also strongly influenced by patient factors such as prior health status and health behaviors. Consequently, in order to make fair comparisons among health care providers, outcome measures of quality require robust adjustments for other risk factors that might influence health outcomes [7,8].

Health care processes represent the delivery of specific clinical services to patients. Targeted health care processes have been shown to be associated with better health outcomes in controlled clinical trials. Process measures are often based upon clinical guidelines. (See "Overview of clinical practice guidelines".)

Structure represents the characteristics of individual health care providers, organizations, and facilities. A hospital's possession of an electronic health record or an intensive care unit, or its percentage of board-certified clinicians, are examples of structural measures of quality. Structural characteristics can enhance quality by enabling better performance on processes of care, which in turn leads to better health outcomes.

In addition to these three categories, an additional component of outcomes, referred to as "intermediate outcomes," is often measured. Intermediate outcomes are measures of clinical conditions that do not directly reflect patients' quality or quantity of life [9]. Achievement of recommended blood pressure, lipid, or hemoglobin A1c (A1C) targets in diabetic patients are all examples of intermediate outcomes. Better performance on intermediate outcomes should lead to better health outcomes (eg, lower rates of morbidity and mortality). Intermediate outcomes are also called surrogate outcomes and are studied in many clinical trials. For instance, determining whether an intervention can achieve lower blood pressure or A1C is often more readily measured (requiring fewer study patients or shorter trial time) than longer-term outcomes such as mortality or incidence of new myocardial infarctions.
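
To make risk adjustment of an outcome measure concrete, the following is a minimal sketch of indirect standardization, one common way to compute a risk-adjusted mortality rate: a hospital's observed deaths are compared with the deaths a risk model predicts for its particular patients. All patient data, predicted risks, and the national rate below are invented for illustration; real measure programs fit hierarchical regression models on large national samples.

```python
# Minimal sketch of indirect standardization for a risk-adjusted mortality
# rate. Patient data, predicted risks, and the national rate are invented;
# real measures use hierarchical regression models fit on national data.

# Each tuple: (died within 30 days: 0/1, model-predicted risk of death)
patients = [
    (0, 0.05), (1, 0.30), (0, 0.10), (0, 0.08), (1, 0.25),
    (0, 0.12), (0, 0.07), (1, 0.40), (0, 0.06), (0, 0.09),
]

observed = sum(died for died, _ in patients)   # deaths that occurred
expected = sum(risk for _, risk in patients)   # deaths the model predicts
national_rate = 0.13                           # assumed national average

# Scale the national rate by the hospital's observed-to-expected ratio.
# A ratio above 1 means more deaths than expected given the case mix.
risk_standardized_rate = (observed / expected) * national_rate
print(f"O/E ratio: {observed / expected:.2f}")
print(f"Risk-standardized mortality rate: {risk_standardized_rate:.1%}")
```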

Validity — In measurement theory, "validity" represents the extent to which a given measure actually captures what it is supposed to measure [10]. For quality measurement in health care, this seemingly obvious statement has at least two important implications:

Process and structural measures of quality are valid only if better performance on these measures actually results in better health outcomes. For example, consider a hypothetical quality measure that counts the percentage of patients with condition X who receive treatment Y. This is a process measure, and it is valid only if treatment Y actually leads to better health outcomes for patients with condition X (on average).

In general, the validity of process and structural measures of quality rests on published studies that demonstrate causal links between the measured processes or structures and health outcomes. If such studies are absent or flawed, process and structural measures will have low validity. A good example of a quality measure with questionable validity is the intermediate outcome of achieving an A1C <7% in patients with type 2 diabetes, particularly for older adult patients. For this measure to be valid, lowering A1C below 7% must lead to better health outcomes; however, the ACCORD trial showed worse health outcomes, including mortality, for these patients [11]. (See "Glycemic control and vascular complications in type 2 diabetes mellitus", section on 'Macrovascular disease'.)

A valid measure of the quality of care delivered by a provider should also, at least in part, represent something under the provider's control [7,8]. This criterion is especially important for measures of health outcomes, which are strongly influenced by the patient's prior health status and behaviors.

Measures of process and structure are generally felt to be more directly under providers' control, and, in contrast to outcome measures, they are rarely adjusted for patient case mix. In general, process measures have restrictive criteria for patient inclusion, which create clinical uniformity among the included patients. According to the guidelines underlying a measure, all patients who qualify for the measure should receive the measured service (eg, measures of beta blocker use after myocardial infarction include only patients who have had a myocardial infarction and exclude those with a contraindication to beta blocker use).
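
As an illustration of how a process measure's denominator restrictions work in practice, the sketch below computes a hypothetical beta blocker measure. The patient records and exclusion logic are invented and far simpler than real measure specifications.

```python
# Minimal sketch of a process measure with denominator exclusions, modeled
# loosely on "beta blocker prescribed after myocardial infarction."
# Patient records and exclusion logic are hypothetical.

patients = [
    {"had_mi": True,  "contraindication": False, "beta_blocker": True},
    {"had_mi": True,  "contraindication": True,  "beta_blocker": False},  # excluded
    {"had_mi": False, "contraindication": False, "beta_blocker": False},  # not eligible
    {"had_mi": True,  "contraindication": False, "beta_blocker": False},  # missed
    {"had_mi": True,  "contraindication": False, "beta_blocker": True},
]

# Denominator: patients who qualify for the measure (MI, no contraindication).
denominator = [p for p in patients if p["had_mi"] and not p["contraindication"]]
# Numerator: qualifying patients who actually received the measured service.
numerator = [p for p in denominator if p["beta_blocker"]]

rate = len(numerator) / len(denominator)
print(f"Process measure performance: {len(numerator)}/{len(denominator)} = {rate:.0%}")
```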

Despite wide acknowledgment that patient factors such as adherence and socioeconomic status can also influence performance on process measures, these measures are rarely adjusted for such patient factors. This lack of adjustment may be due, in part, to concerns about perpetuating sociodemographic disparities in care and "letting providers off the hook" for patient adherence, which providers can influence [7]. None of the process measures presented on Hospital Compare are adjusted for patient case mix, although a 2014 report from the National Quality Forum (NQF) recommended adjustment for sociodemographic factors for at least some measures in some settings [12,13].

As of 2021, CMS has not adopted such adjustments for process measures reported on the Hospital Compare site, although it stratifies hospitals by the percentage of Medicare admissions that are dually eligible for Medicaid when calculating relative performance for the Hospital Readmissions Reduction Program (HRRP). A 2016 report to Congress from the Assistant Secretary for Planning and Evaluation, required under the 2014 IMPACT Act, concluded that beneficiaries with social risk factors had worse outcomes across all of Medicare's value-based payment programs and that "providers that disproportionately served beneficiaries with social risk factors tended to have worse performance on quality measures, even after adjusting for their beneficiary mix" [14].

Given its downsides, such as hiding true disparities in care, statistical adjustment for sociodemographic factors is likely to remain controversial. Alternatives to statistical adjustment include stratification by patient characteristics (for public reporting) and a wide variety of options for payment (aiming to reduce any undesired wealth effects while preserving incentives to improve performance among sociodemographically disadvantaged populations) [15].
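
A minimal sketch of the stratification alternative mentioned above follows: each hospital is compared only against peers with a similar share of socially at-risk patients, rather than having its rate statistically adjusted. The hospitals, dual-eligible shares, rates, and strata cutoffs are all hypothetical.

```python
# Hypothetical sketch of stratified reporting: hospitals are grouped by
# their share of dually eligible admissions and compared within strata.
# All data and cutoffs are invented for illustration.

hospitals = [
    # (name, dual-eligible share, readmission rate)
    ("A", 0.05, 0.14), ("B", 0.08, 0.16), ("C", 0.35, 0.18),
    ("D", 0.40, 0.17), ("E", 0.60, 0.21), ("F", 0.55, 0.19),
]

def stratum(dual_share):
    """Assign a peer group by dual-eligible share (cutoffs are arbitrary)."""
    if dual_share < 0.20:
        return "low"
    return "medium" if dual_share < 0.50 else "high"

groups = {}
for name, share, rate in hospitals:
    groups.setdefault(stratum(share), []).append((name, rate))

# Report each hospital relative to the mean of its own stratum, so
# disparities between strata remain visible instead of being adjusted away.
for label, members in groups.items():
    mean_rate = sum(rate for _, rate in members) / len(members)
    for name, rate in members:
        print(f"[{label}] Hospital {name}: {rate:.0%} (peer mean {mean_rate:.0%})")
```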

Reliability — Reliability reflects the degree of precision in quality measurement (ie, the degree to which differences in measured performance between providers represent true differences in performance, rather than measurement error). Reliability is a key determinant of the likelihood that a particular hospital will be misclassified as high or low performing strictly due to chance.

As a general rule, the reliability of a hospital's performance on a given quality measure increases as the number of observations increases. Hospital Compare recognizes the lack of reliability associated with small sample sizes and displays a warning with some performance data: "The number of cases is too small to be sure how well a hospital is performing." When this warning is present, a hospital's true performance (ie, how well one could expect the hospital to perform, given additional patients qualifying for the measure) could diverge widely from the performance displayed.
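
A small simulation illustrates why reliability grows with the number of observations: for a hospital whose true performance on a pass/fail process measure is fixed, the scatter of its measured rate shrinks as the number of eligible cases grows. The true rate and sample sizes below are arbitrary.

```python
# Illustrative simulation: measured performance on a pass/fail process
# measure scatters around the hospital's true rate, and the scatter
# (measurement error) shrinks as the number of eligible cases grows.
import random

random.seed(0)
TRUE_RATE = 0.80  # assumed true probability of delivering the measured service

for n_cases in (10, 50, 250, 1000):
    # Simulate 2000 reporting periods, each with n_cases eligible patients.
    measured = [
        sum(random.random() < TRUE_RATE for _ in range(n_cases)) / n_cases
        for _ in range(2000)
    ]
    mean = sum(measured) / len(measured)
    sd = (sum((m - mean) ** 2 for m in measured) / len(measured)) ** 0.5
    print(f"n={n_cases:5d}: measured rate {mean:.2f} +/- {sd:.3f}")
```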

Further information about reliability related to quality measurement can be found in a web-based primer and a related shorter, less-technical article [7,16]. More rigorous technical treatments of reliability are also available [17].

RATIONALE FOR MEASURING AND REPORTING PERFORMANCE — There are two primary reasons to measure and publicly report hospitals' performance on quality measures:

To inform patients' choice of hospitals for their own health care [18]. Hospital Compare has been designed to be understandable to patients. However, research indicates that patients rarely use publicly reported quality measure performance data to choose their health care providers [19-21]. Instead, patients tend to rely on recommendations from close acquaintances or their clinicians. Patients' use of publicly reported performance data may increase if reporting entities can find ways to deliver personally relevant performance data to patients in a timely fashion (ie, at the moment when patients are choosing a provider).

To motivate hospitals to improve their performance [22]. If patients make choices based on performance data, then performance improvement may attract more patients. Additionally, making performance data publicly available may motivate improvement by appealing to hospital leaders' and clinicians' sense of professional pride and responsibility [23,24].

The other major use of performance measurement data is to help determine payment for care, such as under reward or penalty "pay for performance" programs. For instance, under CMS's Hospital Value-Based Purchasing Program, a portion of participating hospitals' payments is withheld and used to fund incentive payments based on performance data (eg, 2 percent of payments are withheld and then allocated among participating hospitals based on a combination of achievement and improvement calculated using measures in four categories: clinical outcomes, patient experience and care coordination, safety, and efficiency; each category is weighted at 25 percent in the calculation) [25]. With some exceptions, hospitals across the country participating in the inpatient prospective payment system are included in this program [26]. CMS also has hospital payment programs related to hospital-acquired infections (the bottom 25 percent of hospitals are penalized) and to readmissions, under the auspices of the Hospital Readmissions Reduction Program (HRRP) mentioned above.
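
To make the weighting arithmetic concrete, the following is a simplified sketch of the scoring described above. The domain scores and payment figures are invented, and real scoring first computes the better of achievement and improvement points measure by measure before rolling up to domains.

```python
# Simplified sketch of the Hospital Value-Based Purchasing arithmetic
# described above: four equally weighted domains combine into a total
# performance score. Domain scores (0-100) and payments are invented.

domain_scores = {
    "clinical_outcomes": 70,
    "patient_experience_and_care_coordination": 55,
    "safety": 80,
    "efficiency": 40,
}
weights = {domain: 0.25 for domain in domain_scores}  # 25 percent each

total_performance_score = sum(
    weights[d] * score for d, score in domain_scores.items()
)
print(f"Total performance score: {total_performance_score:.1f} / 100")

# The 2 percent withhold is then redistributed: hospitals scoring above
# (below) the payment-neutral point earn back more (less) than withheld.
base_payment = 10_000_000           # hypothetical annual Medicare payments
withhold = 0.02 * base_payment
print(f"Amount at risk under the 2 percent withhold: ${withhold:,.0f}")
```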

Similarly, many health plans incorporate measures of hospital or clinician performance in their "pay for performance" contracts, which provide increased payments for delivering higher-quality care. In addition, some health plans choose hospitals or clinicians for tiered copayment or limited network products based on cost and quality performance, and performance measurement is an integral part of many global payment contracts, such as those under Medicare's Accountable Care Organization programs.

ORGANIZATIONS GENERATING MEASURES OF HEALTH CARE QUALITY — Several stakeholder organizations in the United States are involved in defining clinical quality measures. A challenge for hospital systems is to develop processes for improving and reporting performance, both of which are complicated when quality measures differ across organizations or insurers. As a result, many of the organizations involved in defining quality measures have agreed upon common sets of measures that can be reported uniformly across hospital systems, permitting a standardized benchmark for comparing hospital performance.

The Joint Commission and the National Committee for Quality Assurance (NCQA) are two of the most prominent organizations developing measures of hospital performance. The National Quality Forum (NQF) and the Agency for Healthcare Research and Quality (AHRQ) are sources for validated quality measures relevant to hospital care.

REPORTED MEASURES OF HOSPITAL QUALITY — Hospital Compare, a quality tool provided by Medicare, provides performance data on process measures in multiple clinical areas. The process measures reported on Hospital Compare reflect hospital-based care that is delivered to all eligible patients for that measure (not just Medicare beneficiaries). Some but not all of these measures are reported separately for inpatient and outpatient hospital departments.

Details about the specific measures publicly reported are provided on the Hospital Compare website. The measures are categorized as "timely and effective care," readmissions, patient safety (eg, complications and health care-associated infections), deaths, and patient experience; measures of value-related payment amounts are also available. Hospitals report quarterly performance on these measures by abstracting patient medical records or through measures calculated using administrative data; performance data are audited by Centers for Medicare and Medicaid Services (CMS) contractors [27].

CONTROVERSIES IN QUALITY MEASURE REPORTING

Overview — The most important controversies in reporting hospital quality process measures center on questions about the validity and reliability of certain quality measures, the incorporation of controls for social risk factors (and other forms of risk adjustment) (see 'Validity' above), and the potential for adverse unintended consequences of reporting such measures. Additionally, as the number of reported measures increases, the ability of hospitals to focus quality improvement efforts becomes diluted and the effort required to collect the data increases. Moreover, because not all aspects of care can be measured, reporting programs might focus hospital quality improvement efforts on rewarded measures (eg, reducing readmissions) instead of unrewarded areas that hospitals might consider higher impact.

Concerns have also been raised that hospitals treating disadvantaged patients or those from underrepresented groups might be adversely impacted by measurement. If such hospitals lose resources under value-based purchasing ("pay-for-performance") programs, the vulnerable populations they serve could suffer as a consequence. Statistical risk adjustment is frequently suggested as a strategy for addressing this type of adverse unintended consequence of value-based purchasing. However, while risk adjustment might soften the financial blow to safety net hospitals, such statistical adjustment also "hides" disparities in care by making them appear to vanish, even though they persist in reality. Therefore, other approaches, such as stratification and explicit offsetting payments to safety net hospitals, are potentially attractive alternatives to risk adjustment [15,28].

Validity of measures — Validity is a pervasive concern for process measures. For example, better hospital performance on processes of care for acute myocardial infarction has been only weakly related to lower 30-day acute myocardial infarction mortality rates [29], and having a home asthma management plan was not associated with fewer subsequent emergency department visits or hospitalizations [30].

Among the process measures reported on Hospital Compare, the measure of initial antibiotic timing in pneumonia was the most controversial and has been dropped. This measure was one of the initial 10 "starter set" measures and originally reflected the percentage of patients with pneumonia who received their first antibiotic dose within four hours of hospital arrival. The measure was criticized for having questionable validity [31,32]; the studies underlying this measure were observational [33,34] and were felt by some to provide only weak evidence of a causal link between antibiotic timing and health outcomes for patients [35,36].

It is also important that the measures clearly identify appropriate patients for inclusion, so that interventions are not promoted that would be inappropriate or unlikely to improve outcomes for certain patient groups, such as those with limited life expectancy [37].

Unintended consequences — If a possible approach to improving measured performance seems antithetical to good patient care, then such an approach is obviously inadvisable. Responding to performance measurement does not absolve health care providers of their duty to uphold the principle of beneficence in patient care.

Nevertheless, it is possible that clinicians, aiming to achieve targets for a specific process component, might adopt behaviors that could negatively impact patient care, even if unintentional. As an example, some commentators were concerned that in an effort to improve performance on the antibiotic timing measure (which is no longer reported), hospital emergency departments might prematurely diagnose pneumonia and give antibiotics to patients with respiratory symptoms who might not require them [38-41]. It is also possible that patients with respiratory symptoms would be given inappropriately high triage priority (ahead of patients with non-respiratory conditions of greater true urgency). Two early single-institution studies suggested that these adverse unintended consequences were occurring [42,43]; however, two subsequent studies (one multi-institutional and the other nationally representative) found no evidence that these unintended consequences actually materialized [44,45]. In 2007, the Hospital Quality Alliance (HQA) increased the timeliness threshold in the antibiotic timing measure from four hours to six hours [46], and the Infectious Diseases Society of America completely eliminated the timing threshold from its guidelines for treating pneumonia [47]. This measure was removed from public reporting in 2012.

In another example, quality measures for hospitalized pneumonia patients previously included vaccination against influenza and pneumococcal pneumonia. Many hospitalized patients likely receive unnecessary repeat vaccinations because a prior vaccination given in an ambulatory or hospital setting was never documented, or the documentation is unavailable or not checked for the sake of expediency. Whether duplicative vaccinations result in patient harm is unclear.

Little opportunity for improvement — There may be little rationale for reporting hospital performance on measures for which performance is uniformly high (ie, average performance near 100 percent, such as two of the process measures for childhood asthma) [30]. Patients cannot use measures with uniformly high performance to distinguish among hospitals (which also means that these measures have low reliability), and the vast majority of hospitals have no room for meaningful improvement. Some have advocated "retiring" such quality measures from public reports, as has been done with oxygen saturation assessment for patients with pneumonia [48] and beta blocker use after a heart attack. However, if some outlier hospitals have persistently low performance on such measures, there may be a good case for continuing to report them.

Global composite measures of performance — In addition to performance data on individual measures, Hospital Compare displays a single overall star rating for each hospital. This overall star rating is a global composite of seven underlying performance domains and is intended to make public reporting easier for patients to digest. However, the weights applied to each domain are arbitrary, and alternative weightings could produce different overall hospital star ratings (including different relative rankings of hospitals within a market area). Some have called for such global rankings to offer user-determined weights, which can improve public understanding of how much the weights matter and allow users to apply weights that reflect their own needs, values, and preferences [49,50].
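
The sensitivity of a composite rating to its weights can be demonstrated in a few lines of code; the hospitals, domain scores, and both weighting schemes below are hypothetical.

```python
# Illustrative demonstration that a composite ranking depends on the
# weights applied to its component domains. Hospitals, domain scores,
# and both weighting schemes are hypothetical.

hospitals = {
    "Hospital A": {"mortality": 0.9, "readmission": 0.4, "experience": 0.5},
    "Hospital B": {"mortality": 0.6, "readmission": 0.8, "experience": 0.7},
    "Hospital C": {"mortality": 0.7, "readmission": 0.7, "experience": 0.6},
}

def rank(weights):
    """Return hospitals ordered by weighted composite score (best first)."""
    composite = {
        name: sum(weights[d] * s for d, s in scores.items())
        for name, scores in hospitals.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# Two defensible weightings produce different "best" hospitals.
outcome_heavy = {"mortality": 0.6, "readmission": 0.2, "experience": 0.2}
experience_heavy = {"mortality": 0.2, "readmission": 0.2, "experience": 0.6}

print("Outcome-heavy weights:   ", rank(outcome_heavy))     # A first
print("Experience-heavy weights:", rank(experience_heavy))  # B first
```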

IMPROVING PERFORMANCE — Public reporting of hospital-level performance measures has been associated with significant improvements over time across United States hospitals [51]. In addition, there is some evidence that public reporting of hospital performance on quality measures has led to better patient outcomes [52]. However, there have been few systematic evaluations to identify which particular efforts to improve hospital performance are most effective.

Cross-sectional analyses of differences between higher- and lower-performing hospitals have found that higher-performing hospitals are likelier to have made greater investments in advanced technology (including electronic health records) and to have higher nurse-to-patient ratios [3,53]. These studies suggest that by making such investments, lower-performing hospitals may achieve performance improvement. Examples of successful initiatives include the Michigan Surgical Quality Collaborative, which focuses on improving surgical care [54], and Project RED, an initiative at an urban hospital that developed low-literacy patient education materials and used a dedicated "discharge advocate" to coordinate discharge support for adult medical patients [55]. A prominent study of door-to-balloon time found that better-performing hospitals were more likely than worse performers to employ one of a number of strategies to facilitate early catheterization laboratory activation by emergency department staff [56]. These strategies included direct activation of the catheterization laboratory by the emergency clinician, use of a single page number for activating the laboratory, en route transmission of the electrocardiogram (ECG) with potential early activation of the laboratory, around-the-clock presence of a cardiologist at the hospital, and provision of real-time feedback to emergency department and catheterization laboratory staff.

However, results of observational studies should be treated judiciously, since associations between hospital investments and process performance may not be causal. Multiple confounders might account for the differences, such as the quality of hospital leadership, the culture among hospital staff, or unadjusted differences in patient populations. Future studies that seek to identify the strategies employed by hospitals with unusually impressive improvements in quality measures (ie, studies of "positive deviance") may supply valuable guidance on quality improvement [57].

SUMMARY AND RECOMMENDATIONS

A consortium of organizations, including the federal government, has been collecting and publicly reporting data on the quality of care delivered by United States hospitals in a number of domains since 2004. The Centers for Medicare and Medicaid Services (CMS) reports measures of process quality, mortality, patient experience, and utilization of health care. (See 'Introduction' above.)

A major goal of quality measurement is to use the results as the basis for quality improvement programs and ultimately to improve care and outcomes, but increasingly they are also being used in payment programs. (See 'Quality measurement and quality improvement' above.)

Health care quality measures can be categorized as measuring outcomes, processes, and structure. Validity and reliability are two important attributes of a measure. Validity represents the extent to which a measure captures what it is meant to measure; reliability reflects the degree to which measured differences represent true differences in performance rather than chance. (See 'Measuring the quality of health care' above.)

Two reasons to publicly report quality measures are to inform patients' choice of health care providers and to motivate improved performance. Measures are also frequently used as the basis for pay-for-performance programs. (See 'Rationale for measuring and reporting performance' above.)

Hospital Compare provides performance data on numerous domains of quality measures. (See 'Reported measures of hospital quality' above.)

Quality measures have limitations. Some process measures may not be robustly associated with the intended outcome; efforts to improve could conceivably produce unintended consequences; and some measures may already be widely maximally achieved, with little opportunity for improvement. (See 'Controversies in quality measure reporting' above.)

REFERENCES

  1. Rosenthal GE, Hammar PJ, Way LE, et al. Using hospital performance data in quality improvement: the Cleveland Health Quality Choice experience. Jt Comm J Qual Improv 1998; 24:347.
  2. Williams SC, Schmaltz SP, Morton DJ, et al. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med 2005; 353:255.
  3. Landon BE, Normand SL, Lessler A, et al. Quality of care for the treatment of acute medical conditions in US hospitals. Arch Intern Med 2006; 166:2511.
  4. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals--the Hospital Quality Alliance program. N Engl J Med 2005; 353:265.
  5. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348:2635.
  6. Donabedian A. Evaluating the quality of medical care. Milbank Q 1966; 44:166.
  7. Friedberg MW, Damberg CL. Methodological Considerations in Generating Provider Performance Scores for Use in Public Reporting: A Guide for Community Quality Collaboratives. Agency for Healthcare Research and Quality, US Department of Health and Human Services. Rockville, MD: 2011. Available at: http://www.ahrq.gov/qual/value/perfscoresmethods/index.html (Accessed on October 03, 2011).
  8. Landon BE, Normand SL, Blumenthal D, Daley J. Physician clinical performance assessment: prospects and barriers. JAMA 2003; 290:1183.
  9. Physician Consortium for Performance Improvement. Measures Development, Methodology, and Oversight Advisory Committee: Recommendations to PCPI Work Groups on Outcomes Measures. American Medical Association. Chicago: 2011.
  10. Hays RD, Fayers P. Reliability and validity (including responsiveness). In: Assessing quality of life in clinical trials: methods and practice, 2nd ed, Fayers P, Hays RD (Eds), Oxford University Press, New York 2005. p.25.
  11. Action to Control Cardiovascular Risk in Diabetes Study Group, Gerstein HC, Miller ME, et al. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 2008; 358:2545.
  12. jama.jamanetwork.com/article.aspx?articleid=2084900 (Accessed on January 19, 2015).
  13. Risk adjustment for socioeconomic status or other sociodemographic factors. National Quality Forum. http://www.qualityforum.org/Publications/2014/08/Risk_Adjustment_for_Socioeconomic_Status_or_Other_Sociodemographic_Factors.aspx (Accessed on January 19, 2015).
  14. Office of the Assistant Secretary for Planning and Evaluation. Report to Congress: Social risk factors and performance under Medicare's value-based purchasing programs. US Department of Health and Human Services, Washington, DC 2016. Available at: https://aspe.hhs.gov/system/files/pdf/253971/ASPESESRTCfull.pdf (Accessed on March 02, 2020).
  15. Friedberg MW, Safran DG, Coltin K, et al. Paying for performance in primary care: potential impact on practices and disparities. Health Aff (Millwood) 2010; 29:926.
  16. Friedberg MW, Damberg CL. A five-point checklist to help performance reports incentivize improvement and effectively guide patients. Health Aff (Millwood) 2012; 31:612.
  17. Adams JL, Mehrotra A, McGlynn EA. Estimating reliability and misclassification in physician profiling. RAND Corporation. Santa Monica: 2010.
  18. Hibbard JH. Engaging health care consumers to improve the quality of care. Med Care 2003; 41:I61.
  19. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998; 279:1638.
  20. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148:111.
  21. Faber M, Bosch M, Wollersheim H, et al. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care 2009; 47:1.
  22. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care 2003; 41:I30.
  23. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA 2000; 283:1866.
  24. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med 2008; 148:160.
  25. Medicare Learning Network. Hospital value-based purchasing. Centers for Medicare and Medicaid Services 2017. https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNProducts/downloads/Hospital_VBPurchasing_Fact_Sheet_ICN907664.pdf (Accessed on February 07, 2018).
  26. www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/hospital-value-based-purchasing/Downloads/FY-2013-Program-Frequently-Asked-Questions-about-Hospital-VBP-3-9-12.pdf (Accessed on January 19, 2015).
  27. www.hospitalcompare.hhs.gov/Data/AboutData/About.aspx.
  28. Damberg CL, Elliott MN, Ewing BA. Pay-for-performance schemes that use patient and provider categories would reduce payment disparities. Health Aff (Millwood) 2015; 34:134.
  29. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006; 296:72.
  30. Morse RB, Hall M, Fieldston ES, et al. Hospital-level compliance with asthma care quality measures at children's hospitals and subsequent asthma-related outcomes. JAMA 2011; 306:1454.
  31. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med 2008; 149:29.
  32. Yu KT, Wyer PC. Evidence-based emergency medicine/critically appraised topic. Evidence behind the 4-hour rule for initiation of antibiotic therapy in community-acquired pneumonia. Ann Emerg Med 2008; 51:651.
  33. Meehan TP, Fine MJ, Krumholz HM, et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA 1997; 278:2080.
  34. Houck PM, Bratzler DW, Nsa W, et al. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community-acquired pneumonia. Arch Intern Med 2004; 164:637.
  35. Dedier J, Singer DE, Chang Y, et al. Processes of care, illness severity, and outcomes in the management of community-acquired pneumonia at academic hospitals. Arch Intern Med 2001; 161:2099.
  36. Silber SH, Garrett C, Singh R, et al. Early administration of antibiotics does not shorten time to clinical stability in patients with moderate-to-severe community-acquired pneumonia. Chest 2003; 124:1798.
  37. Lee SJ, Walter LC. Quality indicators for older adults: preventing unintended harms. JAMA 2011; 306:1481.
  38. Thompson D. The pneumonia controversy: hospitals grapple with 4 hour benchmark. Ann Emerg Med 2006; 47:259.
  39. Pines JM. Profiles in patient safety: Antibiotic timing in pneumonia and pay-for-performance. Acad Emerg Med 2006; 13:787.
  40. Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA 2007; 298:1800.
  41. Metersky ML. Measuring the performance of performance measurement. Arch Intern Med 2008; 168:347.
  42. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest 2007; 131:1865.
  43. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med 2008; 168:351.
  44. Fee C, Metlay JP, Camargo CA Jr, et al. ED antibiotic use for acute respiratory illnesses since pneumonia performance measure inception. Am J Emerg Med 2010; 28:23.
  45. Friedberg MW, Mehrotra A, Linder JA. Reporting hospitals' antibiotic timing in pneumonia: adverse consequences for patients? Am J Manag Care 2009; 15:137.
  46. Mitka M. JCAHO tweaks emergency departments' pneumonia treatment standards. JAMA 2007; 297:1758.
  47. Mandell LA, Wunderink RG, Anzueto A, et al. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis 2007; 44 Suppl 2:S27.
  48. Lee TH. Eulogy for a quality measure. N Engl J Med 2007; 357:1175.
  49. Rumball-Smith J, Gurvey J, Friedberg MW. Personalized Hospital Ratings - Transparency for the Internet Age. N Engl J Med 2018; 379:806.
  50. Rand Health Care. Personalized hospital performance report card. Available at: https://www.rand.org/health-care/projects/personalized-hospital-performance-report-card.html (Accessed on March 05, 2020).
  51. Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures--using measurement to promote quality improvement. N Engl J Med 2010; 363:683.
  52. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Aff (Millwood) 2010; 29:1319.
  53. Elnahal SM, Joynt KE, Bristol SJ, Jha AK. Electronic health record functions differ between best and worst hospitals. Am J Manag Care 2011; 17:e121.
  54. Campbell DA Jr, Englesbe MJ, Kubus JJ, et al. Accelerating the pace of surgical quality improvement: the power of hospital collaboration. Arch Surg 2010; 145:985.
  55. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med 2009; 150:178.
  56. Bradley EH, Herrin J, Wang Y, et al. Strategies for reducing the door-to-balloon time in acute myocardial infarction. N Engl J Med 2006; 355:2308.
  57. Bradley EH, Curry LA, Ramanadhan S, et al. Research in action: using positive deviance to improve quality of health care. Implement Sci 2009; 4:25.
Topic 14592 Version 34.0
