A risk prediction model developed in a clinical setting doesn’t necessarily work well when applied to electronic health record data, recent research shows.
In a study published this month in JAMA Cardiology, researchers from Vanderbilt University School of Medicine and elsewhere sought to validate the prospective atrial fibrillation (AF) risk prediction model originally developed by the Cohorts for Heart and Aging Research in Genomic Epidemiology-Atrial Fibrillation (CHARGE-AF) consortium against a large set of EHRs. They used de-identified EHRs of 33,494 people ages 40 and older who were followed for incident AF at Vanderbilt University Medical Center clinics from Dec. 31, 2005, through Dec. 31, 2010. The predictors in the model included age, race, height, weight, blood pressure and type 2 diabetes; 7.3 percent of the cohort developed AF.
However, the model's performance in the prospective cohort did not carry over when it was applied to real-world electronic patient records. There was "poor calibration" in the EHR cohort, with underprediction of AF risk among low-risk patients and overestimation of risk among high-risk individuals.
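Calibration of this kind is typically assessed by grouping patients into predicted-risk strata and comparing the model's average predicted risk to the observed event rate in each stratum. The following is a minimal illustrative sketch of that check using synthetic data (the data, the miscalibration pattern, and all numbers here are invented for illustration; this is not the study's code or the CHARGE-AF model):

```python
# Illustrative calibration check on synthetic data (not the study's code).
# We compare mean predicted risk to observed event rate within risk deciles.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted 5-year AF risks for 10,000 patients.
predicted = rng.uniform(0.01, 0.40, size=10_000)

# Simulate miscalibration in the direction the study reports: true risk is
# compressed toward the middle, so low predictions understate risk and
# high predictions overstate it. (Invented relationship, for illustration.)
true_risk = 0.05 + 0.5 * predicted
observed = rng.uniform(size=predicted.size) < true_risk

# Assign each patient to a predicted-risk decile, then compare means.
edges = np.quantile(predicted, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(predicted, edges)
for d in range(10):
    mask = deciles == d
    print(f"decile {d}: predicted {predicted[mask].mean():.3f}, "
          f"observed {observed[mask].mean():.3f}")
```

In a well-calibrated model the two columns track each other across deciles; here the lowest deciles show observed rates above the predictions and the highest deciles show observed rates below them, mirroring the pattern the researchers describe.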
“This study highlights the difficulties of applying a risk model derived from prospective cohort studies to an EMR cohort and suggests that these AF risk prediction models be used with caution in the EMR setting,” the researchers said.
A related editorial notes that the performance of risk scores derived from trials or cohort studies may vary when they are implemented in the EHR. If the main goal is to identify the highest-risk adults, the overestimation is acceptable and individual-level accuracy matters less; but if the score is meant to determine actual risks for individual patients, the poor calibration could be problematic.
“This highlights the importance of knowing the clinical context of how a risk model will be used in assessing its utility,” the editorial's authors noted.