Certifying America's Best Hospitals: A Comparison of Consumer-Oriented Hospital Ranking Systems
BACKGROUND: Publicly available hospital rankings have the potential to improve hospital care by guiding patients to higher-quality facilities and spurring quality improvement in lower-ranking hospitals. In 2003, the Centers for Medicare and Medicaid Services (CMS) began requiring hospitals to report certain healthcare quality metrics. Since then, many organizations have used these and other data to generate hospital ranking lists. Each list is at least partially aimed at consumers, and each organization claims that its rankings will help consumers find the "best hospitals" in their region and in the nation. While hailed by patient advocates and other groups, these ranking systems have drawn criticism from many stakeholders in healthcare, including questions about the validity of the organizations' methodologies and about what "best" means when it comes to quality in health care. While some have compared the organizations' methodologies, to date there has been no head-to-head comparison of the concordance between their rankings.

OBJECTIVE: Main research question: What is the concordance between publicly reported, consumer-oriented hospital rankings, and do they measure the same variables? Aim 1: Assess the magnitude of concordance between consumer-oriented hospital ranking lists. Aim 2: Assess the similarities and differences in the domains and methods each list uses to calculate hospital rankings.

METHODS: Using a Google search for terms a consumer might use, including "best hospital" and "number one hospital," the author identified multiple hospital ranking systems. Organizations included in this analysis were restricted to those with a nationwide hospital ranking list based on publicly reported and/or individually collected data, and whose rankings were accessible without a membership fee or subscription.
The author found five qualifying organizations: the Leapfrog Hospital Survey, Consumer Reports' Hospital Rankings, Healthgrades' America's Best Hospitals, Truven Health Analytics' Top 100 Hospitals, and US News and World Report's Best Hospitals. In accordance with Aim 1, each organization's ranking list was accessed and assessed for concordance. In accordance with Aim 2, the methodologies of each organization were compiled and assessed for the types of metrics used. Metrics were organized using the Donabedian framework (Donabedian, 1966) as a model, dividing them into Outcome, Process, Patient Satisfaction, and Other (including structural) categories.

RESULTS: Results from Aim 1 suggest that there is marked discordance between consumer-oriented hospital ranking lists. Results from Aim 2 suggest that although most organizations use data from the CMS database, they vary widely in the number and type of metrics used to calculate their ratings.

CONCLUSION: The high level of discordance between ranking lists that all make similar claims (to help consumers find the "best" hospital) is likely frustrating to consumers, and it may prompt many consumers and stakeholders to ask whether such efforts serve any purpose in determining hospital quality. Results from Aim 2 suggest that, while frustrating, such discordance is not entirely unexpected: different organizations using different methodologies to analyze different metrics are likely to generate different ratings. Future research should focus on determining which, if any, of these hospital ranking methodologies correlates with patient-centered measures of quality.