Generative AI has dominated healthcare discourse this past year, and for good reason: hospitals are woefully understaffed, ill-prepared for spikes in demand for care, and cannot afford to sit idle while their margins constrict. The universe of AI-enabled digital health solutions, each of which promises a way out of the darkness and into a bright new future of care, is growing by the day.
In some cases, these tools are making a marked impact on how clinicians work. They’re generating documentation, automating clerical tasks, and streamlining clinical workflow within the EHR. But none of them are perfect because EHR data is notoriously imperfect.
In healthcare settings like primary care, specialty care, and long-term care, imperfect works for now. While every generative AI technology is likely to produce at least the occasional hallucination, these tools ultimately save clinicians in those settings more time than it takes to fix incorrect outputs. Clinicians in intensive care facilities, however, cannot afford to accept imperfect AI.
In intensive care settings, the slightest errors can jeopardize outcomes. This is especially true for neonatal ICUs, where precise measurements and complex arithmetic for nutritional intervention are a life-sustaining part of care delivery.
The state of data quality in NICUs
Consider your average NICU: care delivery is fast-paced, with nearly every clinical decision made in real time. Those decisions are already hampered by poor data quality. While EHRs contain an abundance of clinical data, the unstructured data NICU teams need is often buried, missing from a patient chart, or riddled with typos. The human eye can correct for many of these errors; a NICU nurse is likely to notice when an extra 0 has accidentally been added to a weight-based medication dosage and can account for the mistake in the moment. We cannot yet trust AI to do the same.
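The plausibility check a nurse performs mentally can be expressed in code. The sketch below is purely illustrative: the drug name and the mg/kg limits are hypothetical placeholders, not clinical guidance, and a real system would draw its ranges from vetted pediatric dosing references.

```python
# Hypothetical sketch of the mental check described above: flag a dose that is
# implausible for the patient's weight, such as an accidentally typed extra zero.
# The drug name and mg/kg limits below are made up for illustration only.

PLAUSIBLE_MG_PER_KG = {
    "example_drug": (2.5, 10.0),  # (min, max) mg per kg per dose -- illustrative
}

def flag_implausible_dose(drug: str, dose_mg: float, weight_kg: float) -> bool:
    """Return True when the ordered dose falls outside the plausible
    weight-based range for this (hypothetical) drug."""
    low, high = PLAUSIBLE_MG_PER_KG[drug]
    per_kg = dose_mg / weight_kg
    return not (low <= per_kg <= high)

# For a 1.2 kg infant, 6 mg (5 mg/kg) passes; a mistyped 60 mg (50 mg/kg) is flagged.
print(flag_implausible_dose("example_drug", 6.0, 1.2))   # False
print(flag_implausible_dose("example_drug", 60.0, 1.2))  # True
```

Even a rule this simple catches the order-of-magnitude entry errors that are easiest to make and hardest to spot in a fast-moving unit.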
Poor data quality in NICUs is already fueling distrust among clinicians and creating an unruly amount of rework. It is fueling burnout among nurses who feel morally obligated to take time, a resource they do not have in abundance, to double-check everything they see in a patient chart.
NICUs are simply not in a place where it would be responsible to trust AI tools in any facet of patient care—assuming those tools are built on pediatric data to begin with. As noted in a recent framework of recommendations for the safe use of pediatric data in AI research, pediatric data is scarce in existing AI research, leading to inappropriate generalizations from adult datasets to pediatric populations.
Contrary to the opinion that the time is now for NICUs to embrace AI to enhance decision-making and improve patient outcomes, health systems have much more reason to practice caution by shoring up the quality of their clinical data in intensive care settings. As the old saying goes, “garbage in, garbage out.”
Prioritizing data vigilance
While the allure of generative AI is strong in the face of provider burnout, what NICUs need first and foremost in order to protect the time of their staff and the lives of the babies they care for are tools and best practices that prioritize data vigilance.
At its core, data vigilance is complete awareness of what data you need to improve clinical decision-making and optimize workflows, how much of that data you have, and how much is missing. These are all prerequisites for impactful AI, and most NICUs in the United States have a few boxes yet to check.
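As a deliberately simplified illustration, "knowing how much of that data you have" can be reduced to a completeness audit over the fields a unit decides it needs. Every field name below is a hypothetical example, not a real chart schema; each unit would define its own required set.

```python
# Illustrative completeness audit: one narrow, concrete form of data vigilance.
# The set of "needed" fields is hypothetical -- a real unit defines its own.
NEEDED_FIELDS = {"birth_weight_g", "gestational_age_wk", "feeding_volume_ml", "recorded_at"}

def completeness(chart: dict) -> float:
    """Fraction of needed fields that are present and non-null in a chart."""
    present = sum(1 for field in NEEDED_FIELDS if chart.get(field) is not None)
    return present / len(NEEDED_FIELDS)

chart = {"birth_weight_g": 1180, "gestational_age_wk": 29, "feeding_volume_ml": None}
print(completeness(chart))  # 0.5 -- two of the four needed fields are populated
```

A score like this makes the gap measurable: a unit can track completeness over time rather than discovering missing values one chart at a time.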
A recent study, for example, sought to determine changes in resources for preterm infants at 822 NICUs in the U.S. between 2009 and 2020. The study's results were limited by missing data on entries as simple as birth NICU level, race and ethnicity, congenital anomalies, and sex. In another study, researchers determined that anywhere between 13 and 91 medication errors may occur for every 100 NICU admissions, most frequently because data as simple as time and date were not entered.
Perhaps the most glaring sign of a lack of data vigilance within NICUs is the fact that most are grappling with "alarm fatigue." According to a 2020 study, 87.5% of alarms from monitoring systems are false, triggered by incorrect data. The phenomenon is fueling a "boy who cried wolf" dilemma among NICU nurses, who have become so accustomed to false alarms that response times lengthen in the event of a real critical situation.
To improve data vigilance and eventually get to a place where AI can make a meaningful impact on patient care, NICUs need improved standards and clinical decision support tools to ensure the basics: that critical data points like timestamps are not missing, that data capture from monitoring devices or milk-tracking software is appropriately synchronized, and that data is validated in real time so clinicians are not chasing information long after a patient has been transferred.
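The basics named above can be sketched as an entry-level validation gate: reject a record with a missing required field or an impossible (future) timestamp before it reaches the chart. The field names and schema here are assumptions for illustration, not an actual EHR interface.

```python
# Hypothetical sketch of real-time entry validation: surface problems with a
# record before it is written to the chart. Field names are illustrative only.
from datetime import datetime, timezone

REQUIRED_FIELDS = ("patient_id", "recorded_at", "value")

def validate_entry(entry: dict) -> list:
    """Return a list of problems; an empty list means the entry passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if entry.get(f) is None]
    ts = entry.get("recorded_at")
    if isinstance(ts, datetime) and ts > datetime.now(timezone.utc):
        problems.append("timestamp in the future")  # often a device clock-sync issue
    return problems

entry = {"patient_id": "A-001", "recorded_at": None, "value": 37.1}
print(validate_entry(entry))  # ['missing field: recorded_at']
```

Flagging the problem at the moment of entry, while the clinician who generated the data is still at the bedside, is far cheaper than reconstructing it later.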
If NICU teams cannot trust the data they're looking at on a daily basis, health systems cannot trust that there is yet a role for AI in informing the care of our most vulnerable infants. The smaller the person, the more consequential the error. Instilling a culture of data vigilance will ensure data is correct, complete, and of high enough quality for AI to make a difference in neonatal care.
Tracy Warren is the CEO of Astarte Medical.