Rating the raters? Hospital experts turn the tables and critique quality comparison sites

It's no secret that many health systems aren't always fans of hospital comparison websites.

Among the rating sources that most often earn their ire are U.S. News & World Report, the Centers for Medicare & Medicaid Services' (CMS') Star Ratings, the Leapfrog Group and Healthgrades.

In a blog posted Wednesday on NEJM Catalyst, a healthcare delivery publication from The New England Journal of Medicine, a group of methodology experts from both academic health centers and the private sector decided to examine some of the top grading systems.

And, of course, they decided to give out some of their own grades.

Authors of the article hailed from Northwestern Medicine, Sound Physicians, the Council of Medical Specialty Societies, the University of Michigan, Washington University in St. Louis and University Hospitals.

"Each rating system had unique weaknesses that led to potential misclassification of hospital performance, ranging from inclusion of flawed measures, use of proprietary data that are not validated, and methodological decisions," they wrote.

"We found that the current hospital quality rating systems should be used cautiously as they likely often misclassify hospital performance and mislead," the piece said. "These results can offer guidance to stakeholders attempting to select a rating system for identifying top-performing hospitals."

RELATED: Leapfrog gives more than 1,000 hospitals a 'C' or worse for safety; infection control a hurdle for low-scoring facilities

Among the grades, the highest was a B given to U.S. News & World Report, while the CMS Star Rating program received a C.

The lowest grades went to Leapfrog, which received a C-, and Healthgrades, which got a D+.

Among the explanations behind their grades, the authors gave a list of pros and cons for each rating system. For instance:

  • They said U.S. News' system was the least likely to misclassify a hospital and used measures that generally have high face validity. But while the system is safety-focused, it misses other quality measures and should be more balanced, they said.
     
  • CMS' Hospital Compare's biggest strength, they said, is that it is compiled by the largest payer in the U.S. But its information is limited in some ways; for instance, it covers very few elective procedures and includes few procedure-specific measures.
     
  • Leapfrog's strengths, they said, include its focus on a culture of safety in hospitals and its use of a "scientifically rigorous composite methodology." But they questioned the validity of the use of "voluntary, self-reported survey data" from hospitals.
     
  • Healthgrades was praised for its procedure- and condition-specific rankings. But it was criticized for not using the Centers for Disease Control and Prevention’s National Healthcare Safety Network measures and because "its proprietary methodology is not transparent."

The response

The article drew a quick response from the rating organizations. 

"Regarding the authors’ assessment of the U.S. News rankings, we’re gratified that the study recognized how responsive we have been to advances in measurement science and feedback from patients, doctors and other stakeholders," said Ben Harder, managing editor and chief of health analysis at U.S. News & World Report, in a statement. "The methodology changes we made this year reflect our commitment to ongoing enhancement of our rankings. The researchers also tipped their hats to our decision to make statistical adjustments for socioeconomic status and other factors to ensure fair comparisons among hospitals.”

However, Leah Binder, president and CEO of Leapfrog, pointed out the piece in Catalyst was an opinion piece, not a peer-reviewed study, and said it contained "serious errors." 

RELATED: Healthcare groups renew calls for CMS to remove Hospital Compare ratings until methodology addressed

"The ratings organizations in the piece would not allow themselves the luxury of issuing hospital ratings as random opinions, without basic rigor and transparency," Binder said in a statement. "The authors are entitled to their own opinions and it is valuable to hear their perspectives. However, they are not entitled to their own facts."

Specifically, Binder said she took issue with an assertion that Leapfrog "audits only a handful of hospitals reporting to the Leapfrog Hospital Survey," saying that claim is "demonstrably false."

"In addition to basic fact-checking, future iterations of this paper would have greater credibility if the majority of authors were not employed at health systems with a history of feuding with one or more of the ratings organizations they analyze," Binder said. "The piece would appear more objective without that conflict." 

A spokeswoman from Healthgrades also called the article a "highly inaccurate portrayal" of Healthgrades' hospital ratings.

"Healthgrades Hospital Quality Ratings are designed to have the greatest relevance for consumers and we work hard to make the information transparent, accessible and easy-to-understand," Mallorie Hatch, Ph.D., director of data science at Healthgrades, said in a statement.

The authors assessed only Healthgrades' overall hospital award and did not analyze its other service line ratings and awards, which would have addressed many of the criticisms in the article, Hatch said. She said Healthgrades was given a chance to offer feedback, but its corrections to "inaccuracies" were not incorporated.

RELATED: Healthgrades announces 2018 top hospitals list

"Many experts agree that clinical outcomes measures (mortality and complications) are the most important indicators of quality, as they can have the greatest impact on the overall health outcome of a patient, and frankly are the most important measures to a patient," Hatch said.

Earlier this year, CMS put out a call for feedback on ways to improve the Star Ratings it posts for hospitals on its consumer comparison website, aiming to make the data more "precise and consistent" and to allow more direct "like-to-like" comparisons. Groups including the American Hospital Association responded by calling for major changes to how the federal hospital ratings are calculated and said the ratings should be taken down in the meantime.

A CMS spokesperson said the agency is confident that the Overall Hospital Quality Star Ratings drive systemic improvements in care and safety, even as it seeks to improve the system.

"After receiving feedback from hospitals and other stakeholders through a series of listening sessions and input from a technical expert panel, CMS developed potential changes to the Star Ratings methodology, which were released for public comment this past spring," the spokesperson said. "CMS appreciates the feedback we’ve received so far from a variety of stakeholders on the Star Ratings methodology, including the work of the researchers in the study you reference and look forward to sharing improvements to the Star Ratings in the future."