Of the nearly 400,000 people in the U.S. who experience a cardiac arrest outside of a hospital setting each year, less than 6% survive, according to a report from the Institute of Medicine.
About two-thirds of out-of-hospital cardiac arrests occur in a private residence where there might not be immediate help or anyone nearby to administer CPR.
Following a cardiac arrest, each minute without treatment decreases the likelihood of surviving without disability, and survival rates depend greatly on where the cardiac arrest occurs, the study said.
But what if smart devices could be trained to detect sounds associated with cardiac arrest and then call for help?
A team of researchers at the University of Washington, including medical researchers, computer scientists, and engineers, wanted to test this idea with a focus on using digital assistants and smartphones to detect the gasping sounds, called agonal breathing, that about half of individuals make when experiencing a cardiac arrest.
"The ubiquitous nature of smart devices opens up a new way of monitoring health for relevant diagnostics," Jacob Sunshine, M.D. assistant professor of anesthesiology and pain medicine at the University of Washington School of Medicine and an author of the proof-of-concept study published in Digital Medicine. If you can develop a system to identify a characteristic diagnostic sound then that could lead to a time-sensitive intervention based on the sound, he said.
Given the ongoing adoption of smart speakers, which Gartner projects will be in 75% of U.S. households by 2020, there is an opportunity to use the microphones in these devices to identify agonal respirations as an audible diagnostic biomarker, the study said.
Estimates suggest that cardiac arrest is the third leading cause of death in the U.S., behind cancer and heart disease, striking almost 600,000 people each year and killing the vast majority of them.
To develop the tool, the researchers trained the system using audio clips of agonal breathing captured from 911 emergency calls made in King County, Washington, from 2009 to 2017. They trained a machine learning model to identify agonal breathing using an Amazon Alexa, an Apple iPhone 5s and a Samsung Galaxy S4.
The research team also trained the tool on other sounds that can be heard in someone's bedroom, including snoring, hypopnea, and central and obstructive sleep apnea events, to improve the tool's accuracy and reduce false positives, the study said. That data was collected from volunteers' sleep environments and also from sleep labs.
The false positive rate was 0.2% over 236,000 audio clips of sleep data collected across 35 different bedroom environments.
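At a high level, this kind of detector boils down to a binary audio classifier: extract frequency-domain features from short clips, then train a model to separate agonal breathing from ordinary bedroom sounds. The sketch below is purely illustrative and assumes synthetic audio in place of the team's real 911 recordings; the feature extraction and the simple logistic-regression classifier are stand-in choices, not the model the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 16_000       # assumed sample rate (Hz)
CLIP = SR         # 1-second clips

def log_spectrogram_features(clip, n_fft=256, hop=128):
    """Mean log-magnitude per frequency bin over a short-time FFT."""
    frames = [clip[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(clip) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags).mean(axis=0)  # one feature vector per clip

def synth(label, n):
    """Synthetic stand-ins: 'positive' clips carry a low-frequency
    burst; 'negative' clips are broadband noise (illustration only)."""
    clips = []
    for _ in range(n):
        clip = rng.normal(0, 0.1, CLIP)
        if label == 1:
            t = np.arange(CLIP) / SR
            clip += np.sin(2 * np.pi * 80 * t) * (rng.random() + 0.5)
        clips.append(clip)
    return clips

X = np.array([log_spectrogram_features(c)
              for c in synth(1, 40) + synth(0, 40)])
y = np.array([1] * 40 + [0] * 40)

# Logistic regression trained with plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

In practice, the hard part the study tackles is exactly what this toy skips: keeping false positives near zero across hundreds of thousands of clips of real sleep audio, which is why the negative training data from sleep labs and volunteers' bedrooms mattered so much.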
"The tool was 97% accurate at detecting agonal breathing events from six meters away, which is larger than the average bedroom," Justin Chan, a Ph.D. student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington and an author of the study, told Fierce Healthcare.
The tool, which the scientists call contactless cardiac arrest detection, is still in the proof-of-concept stage and there remains much work to do before it could be available for commercial use. Sunshine said researchers would like to collect more data from 911 systems to strengthen the tool's ability to detect agonal breathing.
There are opportunities to test the tool in environments where individuals are at highest risk of a cardiac arrest, such as senior living facilities, he said.
Surveys indicate that most Americans are concerned about the privacy of their conversations and personal data when using smart speakers, and the use of these devices does raise some privacy concerns with regard to health monitoring.
With this cardiac arrest-detection tool, no data was sent to the cloud, and audio is purged after a few seconds, which mitigates privacy concerns, Chan said.