Using claims data for doctor ratings poses big problems

In theory, physician ratings and rankings aren't as scary as they were a few years ago.

With activists like New York Attorney General Andrew Cuomo helping to spark systematic changes in such programs, health plans aren't shooting from the hip as much as they were before. Not only do health plans have to "behave" in New York, some are taking the new standards to their entire commercial populations (Cigna, for example, brought in the NCQA to oversee its entire ranking system).

Not only that, it's looking like some big stakeholders are starting to agree on what they want to measure. According to a recent announcement from the oh-so-respectable Robert Wood Johnson Foundation, a group of leading stakeholders in provider ratings, including the Leapfrog Group, the National Business Coalition on Health, the AFL-CIO, the American Medical Association and America's Health Insurance Plans, has agreed on a "Patient Charter" that sets principles for measuring doctors' performance. That ought to help, right?

Still, not everybody's happy, despite the progress that's been made to date. For example, just consider the Massachusetts Medical Society, which is suing over rankings it says are capricious and actively defraud consumers. Its main beef is that the claims data health plans are using to rank its members just don't cut it.

And there, folks, is one of the most stubborn obstacles to straightening this mess out. Let's say that the industry's leaders all agree on what's fair and reasonable to measure clinically, and that they reach some grudging consensus that cost is worth including in some form of ratings scale. Given the extraordinarily complex, flawed and cryptic puzzle that is claims data, it seems unlikely that anyone can use it to make fair deductions about physician performance on a granular level.

OK, I realize that some of you will blanch when you read that, but I have to tell you that the preponderance of what I've read and seen suggests it's true--if for no other reason than that the system, in attempting to do a good job and pay fairly, is breaking under its own weight. (If you think I'm way off base, please feel free to write and tell me so. Some of these folks might be first in line.)

Look, it seems pretty clear that some form of ranking will arrive at some point. But in the meantime, it just isn't going to work to force quality numbers onto a data set that arrives at the door broken. Maybe, when we have a fully-implemented electronic medical records universe out there, we can rethink the whole thing. For now, perhaps health plans can come up with some other way of motivating doctors to play ball? - Anne