Stanford researchers call for ‘interim regulations’ on mental health chatbots to limit patient harm

As more people turn to their smartphones or computers to discuss mental health issues, regulators and industry leaders face difficult questions about how to adequately assess the safety and efficacy of rapidly advancing chatbots.

Pointing to studies showing that some users are more comfortable talking to a machine than to a human, researchers at Stanford University acknowledged the potential of integrating technology into mental health care. But they also cautioned that the industry needs more randomized trials and testing to mitigate the risks of ineffective care.

“Safety and efficacy need to be evaluated long before conversational agents become indistinguishable from humans,” the researchers wrote in a JAMA Viewpoint.

This is particularly important for chatbots designed to provide mental health support. Unregulated technology that violates patients' expectations of privacy could exacerbate certain conditions and sow widespread distrust.

And the authors noted that federal regulations for medical devices, protected health information and medical malpractice “have not evolved quickly enough to address the risk of this technological paradigm,” which underscores the need for “interim regulations to mitigate several foreseeable patient harms” and pave the way for a new approach to mental health care.


Digital health futurist Maneesh Juneja has raised concerns about the use of mental health chatbots, occasionally posting his interactions on Twitter to highlight the strengths and limitations of various software programs.

Proponents argue that chatbots can expand access to care for people with conditions such as depression. Broadly, however, researchers have noted that mental health apps are frequently untested and that effective tools are often lost among a growing number of untrustworthy apps.