Not all EHR research can be taken at face value

Electronic health record research is typically scientific, evidence-based, reliable and authoritative.

But sometimes a study seems to miss the mark.

Case in point: the venerable Journal of the American College of Cardiology has just published a new study that found that EHR-using hospitals did not have higher quality of care for ischemic stroke patients than hospitals using paper records. Such facilities also did not have lower in-hospital mortality rates for these patients. And this was a big study, reviewing 626,473 ischemic stroke patients at 1,236 hospitals over a three-year period.

The authors of the study used the results as an opportunity to conclude that "EHRs do not appear to be sufficient, at least as currently implemented, to improve overall quality-of-care or outcomes for this important disease state."

In an accompanying editorial, the chief of cardiology of an academic medical center stated that EHRs were not meeting the government's "triple aim" of improving efficiency, care or population health; EHRs' first priority, he said, must be to support clinical care, not documentation for billing and reimbursement. He further called the stroke study a "wake-up call that we should heed."

I agree that EHRs are imperfect. They need to be designed and used better to improve safety, efficiency and usefulness.

But the stroke study itself does not support those statements very well. In fact, there are some mighty big holes that arguably undermine its conclusions, I'm sorry to say.

First, the study only included hospitals using a particular tool, called Get With The Guidelines-Stroke. So they were all following the same clinical guideline for the treatment of ischemic stroke patients. Wouldn't that indicate that the guideline, not the EHR, was the driving force behind the data? Wouldn't you expect all of the hospitals following one particular guideline to have the same treatment outcomes regardless of the type of medical record used? The researchers even acknowledged that since the hospitals were all using this tool, it may have "diluted" the effect of any EHR tools that may have been available.

Second, the research was based on data from 2007 to 2010. That means the most recent data was already five years old. That's before the Meaningful Use program even started, before EHRs needed to contain certain minimum functionalities to be certified. The EHR-using hospitals in the stroke study were probably using EHRs merely as electronic paper records. So why would anyone expect their results to be better?

Moreover, data that's five to eight years old is not very relevant, especially in an industry that's evolving as quickly as health IT. Was there no way to include more current information? Even the Centers for Disease Control and Prevention (CDC), not always known for using current data, doesn't use information that old.

Perhaps coincidentally, the CDC reported just last week that EHRs help assess the progress made in controlling hypertension, a major cause of strokes. In a related blog post, the heads of the Million Hearts initiative reiterated that while the CDC report shows promise in improving health, EHRs are merely tools and aren't to be taken "in isolation."

This is not the first time I've pointed out that studies on EHRs aren't always helpful. But at the least, we should be able to take a study at face value and not feel misled by it.

This is 2015. We have a better handle on the capabilities of EHRs, their promise and their limitations. If a study's message is meant to be taken seriously, the research should reflect that. - Marla (@MarlaHirsch and @FierceHealthIT)