Big names in technology, including Amazon, Microsoft, and IBM, worked with healthcare industry groups to develop a standard for the use of artificial intelligence in healthcare.
Convened by the Consumer Technology Association (CTA), a working group made up of 52 organizations set out to create a common language so industry stakeholders can better understand AI technologies.
The standard, which was released Tuesday, has been accredited by the American National Standards Institute (ANSI).
The CTA working group was formed a year ago to standardize definitions and characteristics of healthcare AI.
Healthcare organizations involved in the project include the American Medical Association, Doctor on Demand, Livongo, Ginger, AdvaMed, the American Telemedicine Association, Fitbit (soon to be owned by Google) and Humana, which last September became the first payer to join the CTA.
It's part of the CTA’s new initiative on AI and the first of several steps the CTA plans to take to create a foundation for implementing medical and health care solutions built on AI.
“This standard creates a firm base for the growing use of AI in our health care—technology that will better diagnose diseases, monitor patients’ recoveries and help us all live healthier lives,” said Gary Shapiro, president and CEO, CTA. “This is a major first step—convening some of the biggest players in the digital health world—to help create a more efficient health care system and offer value-based health care to Americans.”
AI-related terms are used in different ways, leading to confusion—especially in areas of the health care industry such as telehealth and remote patient monitoring.
The healthcare AI standard developed by the working group provides a foundation of definitions and common terminology for understanding AI. The goal in creating the standard is to foster "a better understanding of AI technologies and common terminology so consumers, tech companies and care providers can better communicate, develop and use AI-based health care technologies," the CTA said.
“So far, common terminology has defined the intent of use—and that’s one of the most significant challenges in developing standard applications of AI,” said Rene Quashie, VP, policy and regulatory affairs, digital health, CTA. “As health systems and providers use AI tools such as machine learning to diagnose, treat and manage disease, there’s an urgent need to understand and agree on AI concepts for consistent use. This standard does exactly that.”
Among the definitions, the standard includes debated terms such as “assistive intelligence,” which the group defined as a category of AI software that “informs” or “drives” diagnosis or clinical management of a patient with the healthcare provider making the ultimate decisions before clinical action is taken.
Other definitions include terms like de-identified data, synthetic data, remote patient monitoring, and patient decision support system.
As the healthcare system deals with clinician shortages, an aging population and the persistence of chronic diseases in the US, technologically driven solutions, such as AI, will increasingly be used to meet clinician and patient needs, the group notes.
As AI is increasingly used for decision support and decision making, healthcare professionals will need to take ownership of those decisions and apply judgment and empathy.
"Transparency and a common language will be key to enable the proper and safe functioning of AI," said Pat Baird, regulatory head of global software standards at Philips and co-chair of the working group.