Major health plans, along with health technology companies like Philips and Ginger, collaborated to develop a new standard to advance trust in artificial intelligence solutions in healthcare.
Convened by the Consumer Technology Association (CTA), a working group of 64 organizations set out to create a new standard that identifies core requirements and a baseline for determining trustworthy AI solutions in healthcare.
The standard, which was released Wednesday, has been accredited by the American National Standards Institute.
“AI is providing solutions—from diagnosing diseases to advanced remote care options—for some of health care’s most pressing challenges,” said Gary Shapiro, president and CEO of CTA. “As the U.S. health care system faces clinician shortages, chronic conditions and a deadly pandemic, it’s critical patients and health care professionals trust how these tools are developed and their intended uses.”
The CTA working group was created two years ago to standardize definitions and characteristics of healthcare AI.
Healthcare organizations involved in the project include America's Health Insurance Plans, AdvaMed, 98point6, Ginger, Philips and ResMed.
The new standard, part of CTA’s initiative on AI in healthcare, is the second in a series of standards focused on implementing medical and healthcare solutions built on AI. Last year, the CTA working group developed a standard that creates a common language so industry stakeholders can better understand AI technologies.
The consensus-driven standard considers three expressions of how trust is created and maintained—human trust, technical trust and regulatory trust, according to the CTA.
Human trust covers topics related to human interaction with, and perception of, the AI solution, including explainability, user experience and the solution's levels of autonomy.
Technical trust specifically considers topics related to data usage, including access and privacy as well as data quality and integrity—including issues of bias—and data security. This area also addresses the technical execution of the design and training of an AI system to deliver results as expected.
Regulatory trust is gained through industry compliance grounded in clear laws and regulations, guidance from regulatory agencies, federal and state laws, accreditation boards and international standardization frameworks.
“Establishing these pillars of trust represents a step forward in the use of AI in health care,” said Pat Baird, regulatory head of global software standards at Philips and co-chair of the working group, in a statement. “AI can help caregivers spend less time with computers, and more time with patients. In order to get there, we realized that different approaches are needed to gain the trust of different populations and AI-enabled solutions need to benefit our customers, patients and society as a whole. Collaboration across the health care ecosystem is essential to establish trust.”
Industry efforts to provide AI oversight
Healthcare organizations are ramping up their investments in AI in response to the COVID-19 pandemic. Nearly 3 in 4 healthcare organizations surveyed expect to increase their AI funding, with executives citing more efficient processes as the top outcome they hope to achieve with AI, a Deloitte survey found.
A separate Optum survey found trust in AI is a significant barrier. When healthcare executives who had expressed doubt or concern about AI were asked why, 73% selected a lack of transparency in how the data are used or how the technology makes decisions, and 69% selected the role of humans in the decision-making process.
Industry stakeholders are taking steps to advance the use of AI and machine learning in healthcare.
On the regulatory front, the U.S. Food and Drug Administration (FDA) last month released its first AI and machine learning action plan, a multistep approach designed to advance the agency’s management of advanced medical software. The action plan aims to force manufacturers to be more rigorous in their evaluations, according to the FDA.
“This action plan outlines the FDA’s next steps towards furthering oversight for AI/ML-based SaMD,” said Bakul Patel, director of the Digital Health Center of Excellence in the Center for Devices and Radiological Health, in a statement. “The plan outlines a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive. To stay current and address patient safety and improve access to these promising technologies, we anticipate that this action plan will continue to evolve over time.”
The American Medical Informatics Association (AMIA) also recently issued new recommendations for oversight of AI-driven clinical decision support (CDS) systems.
“An exponential growth in health data, combined with growing capacities to store and analyze such data through cloud computing and machine learning, obligates the informatics community to lead a discussion on ways to ensure safe, effective CDS in such a dynamic landscape,” said Patricia Dykes, Ph.D., AMIA board chair and program director of research at the Brigham and Women’s Center for Patient Safety Research and Practice.
“The use of AI in healthcare presents clinicians and patients with opportunities to improve care in unparalleled ways,” said Carolyn Petersen, lead author and AMIA Public Policy Committee member. “Equally unparalleled is the urgency to create safeguards and oversight mechanisms for the use of machine learning-driven applications for patient care.”