The number of patients who die during a hospital stay is an easy measure of a hospital's quality--perhaps too easy, according to a new report in The British Medical Journal.
The shortcoming of hospital mortality ratios--used in the United States and United Kingdom to identify hospitals where more patients die than would be expected--is that they do not distinguish preventable deaths from inevitable ones. As a result, some hospitals that treat sicker patients are unfairly labeled "bad," say authors Peter Pronovost, MD, PhD, professor of anesthesiology and critical care medicine at the Johns Hopkins University School of Medicine, and Richard Lilford, PhD, professor of clinical epidemiology at the University of Birmingham in England. Only one of every 20 hospital deaths in the United States is believed to be preventable, Pronovost notes.
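The authors' case-mix objection can be sketched numerically. The following is a minimal illustration, with entirely hypothetical figures, of how a standardized mortality ratio--observed deaths divided by the deaths expected given a hospital's patient mix--can flag a hospital whose excess deaths reflect sicker patients rather than worse care:

```python
def smr(observed_deaths, expected_deaths):
    """Standardized mortality ratio: values above 1.0 flag a
    hospital as having more deaths than expected."""
    return observed_deaths / expected_deaths

# Two hypothetical hospitals. Hospital B admits sicker patients,
# so more of its deaths are inevitable rather than preventable.
hospital_a = smr(observed_deaths=80, expected_deaths=100)
hospital_b = smr(observed_deaths=130, expected_deaths=100)

# If the risk-adjustment model underestimates how sick B's
# patients are, B's "expected" count is too low, so B looks
# "bad" (SMR > 1) even if its preventable-death rate is no
# higher than A's.
print(f"Hospital A SMR: {hospital_a:.2f}")  # 0.80
print(f"Hospital B SMR: {hospital_b:.2f}")  # 1.30
```

Because the ratio pools preventable and inevitable deaths into a single number, two hospitals with identical quality of care can land on opposite sides of the threshold purely on the strength of their risk adjustment.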
Criticizing the use of death rates in general and their role in the castigation of England's Stafford Hospital in particular, the authors, experts in disease monitoring, recommend that officials favor measures that more accurately assess patient harms and the care being provided. For example, the rate of bloodstream infections in hospital intensive care units, which cause 31,000 deaths in U.S. hospitals each year, may be a more useful yardstick, Pronovost says.
"The goal is to say, yes, we need to be more accountable for quality of care, but we need to be scientific in how we separate hospitals of better quality from hospitals of worse quality," says Pronovost, adding that more research needs to be done into which measures most accurately assess how hospitals prevent needless deaths.