Knowing that patients can see how well providers perform on quality-of-care measures motivates large practices to work on improving quality, suggests a new study published in the journal Health Affairs.
Researchers from the Medical College of Wisconsin found that a consortium of physician groups improved its performance on several of 14 care quality measures over a five-year period.
Physician groups told the researchers that publicly reported performance data "motivated them to act on some, but not all, of the quality measures," according to the study abstract. The response was most notable when results would be displayed publicly, such as on a practice's website, the researchers noted.
The most significant improvement came in measures related to diabetes treatment: Performance improved by double digits in three of the six measures, and by up to 9 percent in the other three, according to an announcement by the Medical College of Wisconsin. Performance also improved in cholesterol control and breast-cancer screening.
"Our findings show that voluntary reporting of quality measures helps drive improvement for participants, which should lead to better healthcare for our patients," said lead study author Dr. Geoffrey Lamb. "Furthermore, these results suggest that large group practices are willing to engage in quality improvement efforts in response to that public reporting."
The move toward public quality reporting is growing. In December physicians from 174 practice sites in Indiana said they would publicly post their clinical quality measure scores on a reporting website. The scores will be updated throughout the year and detail performance in areas including diabetes treatment, heart health, respiratory ailments, and women's and children's healthcare.
By publicly reporting their performance on industrywide quality measures, physician practices can also potentially counter less objective reviews in popular online doctor rating sites.
In a recent survey by the American College of Physician Executives, a majority of respondents agreed that online ratings sites are "invalid measurements of competency" and contain sampling biases. They ranked the value of those sites at slightly above 3 on a scale of 0 to 10.