Healthcare organizations are always on the lookout for the Centers for Medicare & Medicaid Services’ (CMS') annual hospital star ratings, but this week’s release of the quality measures carried a bit more weight for stakeholders than in years prior.
Unveiled Wednesday, the latest star ratings represent CMS’ first crack at a long-awaited refinement of the methodology it uses to generate quality scores ranging from one to five stars.
On the surface, the distribution of scores posted on CMS’ Hospital Compare website generally shifted upward from last year’s report.
Among the 4,586 eligible hospitals, 13.6% received a five-star rating, 29.5% a four-star rating, 30.3% a three-star rating, 20.6% a two-star rating and 6.1% a one-star rating. Another 26% of hospitals remained unrated.
Last year’s release applied to the same total of 4,586 hospitals. Of these, 8.9% received a five-star rating, 23.7% a four-star rating, 24.4% a three-star rating, 15.5% a two-star rating and 5% a one-star rating. Another 21.5% were unrated.
The ratings are built on data reported to CMS by hospitals. They are a key public resource for consumers seeking quality care and providers benchmarking their performance for self-improvement, said Beth Godsey, senior vice president of data science and methodology at Vizient, a health analytics and advisory company.
“The organizations that we work with and our membership, first and foremost, are very much interested in having data that captures their operations, that captures their outcomes,” she told Fierce Healthcare. “When the methodology doesn’t necessarily reflect the care they provide or how sick their patients might be in a public format, there’s some concerns about mixed messages, there’s concerns about [an] organization not being reflected as completely as [it] could.”
Those concerns were frequently voiced by providers, analysts and industry groups who took issue with CMS’ previous rating methodology.
Over the years, these stakeholders argued that the logic behind the ratings was difficult to understand due to, among other things, the use of latent variable modeling to combine and weight multiple quality measures. Critics said this approach made the scale unpredictable and a poor tool for measuring change from year to year.
“What we found initially were some challenges with the approach that left our members and hospital administrators wondering how they can use this [to] really drive performance improvement and change,” Godsey said. “When the star ratings were announced, [hospitals] would spend the majority of that initial release not really identifying, ‘here’s where we need to improve.’ They were instead focused on, ‘this is what’s gone on with the methodology,’ and trying to understand and dissect and make sense of how that methodology was impacting their score.”
CMS aims for simplicity
Last summer, CMS said it would overhaul its calculations with a new, less complex system that could be “easily understood by [providers’] organization, explained to patients and used to identify areas for quality improvement.”
The new approach instead reduces the total number of quality measures and groups them into five categories (mortality, safety, readmission, patient experience and timely and effective care), with weighting clearly posted on the agency’s methodology page.
Stakeholders like the American Hospital Association (AHA) and America’s Essential Hospitals tentatively praised the changes when they were announced last summer, and by and large maintained that stance when reviewing the first release.
“[This week’s] star ratings update is an improvement that will likely make the ratings more useful for both patients and hospitals,” the AHA said in a statement.
“For example, we are pleased that CMS is now calculating hospital performance by simple averages, rather than using a previously flawed approach. In addition, we appreciate that CMS reorganized some of the measures so individual topics wouldn’t carry an undue amount of weight in the determination of star ratings. These changes have made the star ratings easier to interpret, more insightful for hospitals working to improve their quality of care and more balanced in favor of high-priority topics, like infections,” the organization said.
Godsey agreed that the new system was “a huge step forward” in terms of predictability.
She said the scores announced this week were generally consistent with what Vizient anticipated when applying the simplified methodology to prior data. Additionally, early conversations with the group’s provider partners suggest that hospitals have been better able to respond to the ratings they receive.
“The swings that hospitals experienced prior to this release were a bit more unanticipated and difficult for hospitals to predict,” she said. “What I’ve seen in the data, and just some feedback from some of our members as we’re reviewing, [is that] now they have an understanding at least about why their star rating has changed. Now [they] can really double down on using the framework as an opportunity for improvement.”
Still room for refinement
Pleased as they were with efforts to update the system, stakeholders were not about to let CMS rest on its laurels.
“While these new ratings appear to show incremental improvement among our hospitals, and the new methodology is a step in the right direction, we are not yet to a point where the ratings give patients a complete picture,” Maryellen Guinan, principal policy analyst for America’s Essential Hospitals, told Fierce Healthcare in an emailed statement.
“The lack of risk adjustment for the readmissions measure group remains a concern, and we call on the agency to align risk adjustment policies for that group with those for other programs, such as the Hospital Readmissions Reduction Program. We will continue to work with CMS to ensure the star ratings give consumers a full, accurate and fair way to compare hospitals,” she said.
The AHA’s statement, meanwhile, warned that frequent changes to the star ratings methodology over the past several years could be “very misleading” to patients and providers comparing scores across releases. It also called out “flaws in the methodology” that remained untouched and said that other adjustments to the process “might not have the effect that CMS hopes.”
“We remain concerned that CMS’ failure to account for social risk factors in calculating performance on measures like readmissions biases the ratings against those hospitals caring for more vulnerable patients,” the organization wrote. “Further, while we agree with the intent of CMS’ new peer grouping approach—that is, to create a more level playing field between hospitals offering differing levels of care—we believe it needs improvement to ensure it fosters equitable comparisons.”
Godsey echoed the AHA’s peer grouping criticisms. Placing hospitals within similar cohorts can be a useful tool for benchmarking performance against similar organizations, she said, and certainly benefits patients considering care at two different facilities.
However, “the underlying logic and criteria with which they define those specific types of cohorts still shows some potential opportunities to refine,” she said. “A large academic medical center versus a small critical access hospital—they may both be a five-star organization, but as a patient you might be seeking certain services that a critical access hospital couldn’t offer versus an academic medical center that could.”
Godsey similarly called attention to the timeliness of the data driving hospitals’ overall ratings. In certain heavily weighted areas such as mortality, she said that CMS is incorporating data that are two or three years old to generate its scores.
“The most current information is not reflected in what was released [this week],” she said. “If CMS were to potentially use more current information, more current data that they have access to, that would also make the star ratings more actionable not only for providers, but for patients.”
With the goals of transparency, simplicity and usability in mind, there’s also something to be said for how the agency communicates the nuances of these ratings to patients who are unfamiliar with the methodology.
For instance, the data CMS leverages to reach its scores are largely generated by older Medicare patients, Godsey said, and therefore do not necessarily reflect the care received by the general population.
To further improve the value of these ratings to healthcare consumers, CMS needs to “be clear that obstetric patients or pediatric patients or families who are looking for certain types of services may not be reflected completely in these star ratings,” she said. “There’s certainly value in what they are sharing from the 65-and-older Medicare population, but there’s certainly opportunities there to explore [what that means for] certain types of populations.”