I usually take electronic health record research at face value. These are generally scientific endeavors, often conducted by esteemed institutions or academicians who are testing the effectiveness of such tools, assessing their impact on satisfaction and productivity, or determining their role in research.
So it's disconcerting to read not one, but two studies this week that question the quality of some of the EHR research being conducted.
First there's the RAND Corporation's study on Meaningful Use, funded by the Office of the National Coordinator for Health IT. If you read ONC's blog post about the study, Meaningful Use is fabulous. It "improves quality, safety and efficiency outcomes." The blog post provides all sorts of statistics to support its claim. It glosses over the fact, however, that most of the studies being cited pertain only to clinical decision support and computerized physician order entry; the other components of Meaningful Use "are not as well studied."
A deeper dive into the study reveals a broader, grimmer picture. Sure, some studies have shown that Meaningful Use has provided clinical benefits. But the study's authors point out that some research on EHRs found no benefits, and in some cases negative effects on quality, safety and outcomes.
They also point out that much of the research on EHRs has been simplistic, repetitive and potentially biased, and that some results are "underreported." In other words, the research itself wasn't very meaningful.
"There has not been a commensurate increase in our understanding of the effect of health IT or how it can be used to improve health and healthcare," the authors wrote. "Study questions, research methods, and reporting of study details have not sufficiently adapted to meet the needs of clinicians, healthcare administrators, and health policymakers and are falling short of addressing the future needs of the healthcare system. ... With the increasing adoption of EHRs and other forms of health IT, it is no longer sufficient to ask whether health IT creates value; going forward, the most useful studies will help us understand how to realize value from health IT."
Then we have a new study published in the American Health Information Management Association's Perspectives in Health Information Management that exemplifies the concerns raised by the RAND authors about the inadequacies of some EHR studies. This one assessed how medical students' personalities and other traits, such as gender, affect how they view EHRs.
What could have been enlightening, groundbreaking information turned out to be a snooze. The study found that students who described themselves as computer savvy and open-minded were more likely to view EHRs as useful and easy to use. And men found EHRs a bit easier to use than women did.
Seriously? No offense, but we did not need a study to tell us that 20-somethings who are more comfortable with computers and more open-minded are more likely to view EHRs positively.
Why are only certain topics being studied? Are they easier to conduct? More likely to be published? Less controversial? And how are the topics chosen?
And if they're going to be reported, let's not underreport or slant the results. Give readers the whole picture. We can handle it.