Throughout the history of medicine, it has usually been presumed that the majority of clinical decisions made by a physician are correct. Of course, physicians make mistakes, but those missteps are not the focus of a discussion of unexplained clinical variation. Rather, what is to be examined is whether the clinical decisions made for individual patients with similar presentations are consistent between doctors. If, for a given clinical entity and presentation, there is a “better” way to treat the patient (i.e., safer, more effective, less resource intensive), do the vast majority of physicians reach a similar plan of care? By analogy: for a given airplane in a given set of conditions, if there is a better (e.g., safer) way to fly, we would expect well-trained pilots to reach similar conclusions about how to do so. This is not to say that there cannot be more than one “right answer,” but for important, inherently categorical decisions (e.g., speed up, slow down) we would be confident that the pilots, as experts, will make the right call.
While the primary focus should certainly be clinical harm, it is likewise important not to shrink from the topic of costs. Excess costs are also a patient harm. The harm may be individual, in that a patient is asked to pay for a treatment or procedure he or she did not need (and endures financial strain), or it can be viewed as collective, as ever-rising healthcare costs damage the nation as a whole. While a full examination of the effects of individual and collective financial harm is beyond the scope of this discussion, it can be stipulated, by way of example, that some patients experience dramatic negative impacts (e.g., bankruptcy) secondary to treatment costs, and that ever-escalating healthcare costs strain government budgets (e.g., states’ Medicaid budgets) and burden employers (through the rising cost of providing health insurance to employees).
An examination of Medicare fee-for-service data encompassing 22.6 million emergency department (ED) visits found, after risk adjustment, that admission (to inpatient care) rates varied significantly in a county-by-county analysis. Between the 20th percentile and the 80th percentile, the admission rate varied two-fold, translating to a different decision (admit vs. not admit) for every fifth patient.1 Even if one allows that risk adjustment may not have been complete, it is hard to attribute such a wide discrepancy to any remaining unaccounted-for risk factors. One must consider not only the cost but also the documented risks associated with hospitalization.2,3 This sort of risk-adjusted admission rate variation has been replicated many times, including when examining a single diagnosis.4
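As a quick arithmetic check, the two figures above jointly pin down the underlying rates: a two-fold spread whose absolute difference amounts to one patient in five (20 percentage points) can only mean roughly 20% at the 20th percentile and 40% at the 80th. A minimal sketch of that inference (the specific percentages are derived from the relationships stated here, not quoted from the study):

```python
# Two relationships reported above (illustrative inference, not study data):
#   high == 2 * low          -> the admission rate varied two-fold
#   high - low == 1 / 5      -> a different decision for every fifth patient
# Substituting: 2*low - low = 1/5, so low = 0.20 and high = 0.40.

diff = 1 / 5        # one patient in five decided differently
low = diff          # from high = 2*low and high - low = diff
high = 2 * low

print(f"implied 20th percentile: {low:.0%}, 80th percentile: {high:.0%}")
# implied 20th percentile: 20%, 80th percentile: 40%
```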
Another realm of unexplained clinical variation concerns the performance of procedures, especially when the procedure in question is elective. An examination of cardiac angiography and percutaneous coronary intervention (PCI) across 544 hospitals and 1.2 million angiograms found troubling results.5 In nearly 20% of hospitals, over 40% of angiograms were performed on asymptomatic patients, a group for whom angiography is only rarely indicated. Beyond this apparent overuse, the percentage varied significantly: in approximately 20% of hospitals, it was less than 15%. Similarly, and perhaps even more damning, the proportion of PCI procedures judged inappropriate by the prevailing American College of Cardiology guidelines was both shockingly high at some facilities and highly variable. In 25% of hospitals, over 30% of PCI procedures were inappropriate, and at a dozen hospitals this percentage exceeded 50%. Conversely, there were many hospitals at which less than 10% of PCI procedures were inappropriate.
At least equally troubling are the results of a study including over 115,000 men with low-risk prostate cancer.6 At the time of their diagnosis (and currently), “active surveillance” (no immediate surgery) was an acceptable treatment option. Bearing in mind the significant side effects of surgery and other treatments (e.g., urinary incontinence, erectile dysfunction), this is the sort of treatment decision that should very much depend on the beliefs and preferences of each individual patient as he or she weighs the pros and cons of each option. How disconcerting, then, the authors’ conclusion: “Our finding of significant variation in inter-institutional practice suggests that the institution at which one is treated matters as much as, if not more than, one’s health status and beliefs regarding treatment decisions.”6 After case-mix adjustment, unexplained clinical variation remained by geographic region, but the variation depended even more on the individual facility and set of doctors from which the patient received advice and treatment. The authors stated, “These results suggest that physicians must do a better job of overcoming their ingrained tendencies by constantly assimilating evolving treatment algorithms into their practice patterns.”6
Beyond acknowledging that this sort of unexplained clinical variation exists, what can be done? One way to address the problem is the use of evidence-based standards, such as MCG care guidelines, to inform clinicians of current research and best practices. The jumping-off point for identifying where to focus efforts in analyzing a given population (e.g., commercial or Medicare) can be supported using data science tools such as MCG Benchmarks and Data. You can learn more about MCG Benchmarks and Data in this video.
Another example of “bumper rails” to restrain clinical practice is the use of antibiotic stewardship programs. A systematic review and meta-analysis of randomized controlled trials concluded that such programs increased antibiotic utilization consistent with published guidelines, reduced inpatient length of stay, and shortened the duration of antibiotic treatment.7
Just as a pilot must be able to justify specific actions and decisions, so too should an interventional cardiologist be able to reasonably defend a decision to perform PCI. Clinical practice is not expected to be uniform, and often it is not clear which path is the “right” one. But some unexplained clinical variation can produce harm, and there are times when a given clinical decision ought to be subject to some manner of review and justification.
- Dr. Bill Rifkin, Managing Editor and Physician Relations Specialist, MCG Health
Image courtesy Shutterstock/ronstik
- Caines K, Shoff C, Bott DM, Pines JM. County-level variation in emergency department admission rates among US Medicare beneficiaries. Annals of Emergency Medicine. 2016; 68: 456-460.
- James JT. A new evidence-based estimate of patient harms associated with hospital care. Journal of Patient Safety. 2013; 9: 122-128.
- Makary MA, Daniel M. Medical error — the third leading cause of death in the US. British Medical Journal. 2016; 353: i2139.
- Freeman JV, et al. National trends in atrial fibrillation hospitalization, readmission, and mortality for Medicare beneficiaries, 1999-2013. Circulation. 2017; 135: 1227-1239.
- Bradley SM, et al. Patient selection for diagnostic coronary angiography and hospital-level percutaneous coronary intervention appropriateness. JAMA Internal Medicine. 2014; 174: 1630-1639.
- Loppenberg B, et al. Variation in the use of active surveillance for low-risk prostate cancer. Cancer. 2018; 124: 55-64.
- Davey P, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database of Systematic Reviews. 2017; issue 2.