Utah's new AI Office zeroes in on generative AI in mental healthcare

The nation’s first permanent state artificial intelligence office, established this month in Utah, wants to regulate the use of mental health chatbots.

A draft bill will likely be circulated through Utah’s state house by the end of the year.

The office is led by Zach Boyd, a Brigham Young University math professor on leave from the university. Boyd researches applications of machine learning and artificial intelligence in the social sciences.

The office's statutory mandate is to protect consumers, foster innovation and observe and learn about AI. It will also put forward legislative proposals to the state house. Boyd said the office has been tracking dozens of issues the state may want to act on.

“I am mostly focused on the practical reality that Congress is going to be incredibly slow moving on this if they accomplish anything at all about it,” Boyd said in an interview. “Even if some of this may eventually be resolved at the federal level, the states are the laboratories of democracy. This is where things get tried out quickly.”

Boyd told Fierce Healthcare the office will focus first on regulating mental health chatbots because the state wants to address the use of AI in healthcare, and mental health chatbots are an established yet concerning technology.

“Chatbots are controversial because, in some cases, they seem like they're ready to go,” Boyd said. “But we definitely know also they're, at this point, really unreliable.”

Boyd said other uses of generative AI in healthcare are less controversial, such as ambient note-taking.

Utah's AI office wants to regulate mental health chatbots used in the licensed practice of medicine, not general-purpose applications like ChatGPT.

There are also unresolved legal questions about how mental health chatbots are currently deployed that may need to be addressed in legislation, he noted.

“Diagnosing depression is a regulated activity,” Boyd said. “You need a license to do this in the state of Utah. So it's actually very possible, and maybe even probable, that a company that released a bot that’s intended to do this would be engaging in unlicensed practice in the state of Utah.”

The AI office is convening stakeholders to discuss the regulation of mental health chatbots. The group includes small provider practices, large health systems such as Intermountain Health and the University of Utah Health System, national mental health companies, startups, academics and patients.

Among the Utah AI office’s concerns are that chatbots will provide inaccurate information and make mistakes that a licensed professional would not. Licensed therapists have told Boyd they worry patients might form relationships with chatbots that ultimately harm them.

But the technology also has benefits, Boyd said. For example, chatbots are readily available to interact with patients during times of crisis when a provider may not be available, such as in the middle of the night. Because chatbots are more accessible than a synchronous provider visit, they could also help people seek treatment more quickly. Chatbots may also be cheaper than care by a licensed professional.

The Utah AI office will likely require clinical evidence to support licensed providers' use of mental health chatbots. Boyd also floated setting “benchmarks” that would hold up even as the technology evolves.

“The state is not ready to authorize the use of these things, without clinical evidence that they work,” Boyd said. “So I think any framework that we would propose is that someone who wants to use these tools in a regulated profession has to provide evidence to the state that those tools will work and work to an acceptable degree of reliability for them to be deployed in practice.”

Boyd said the office also wants to promote innovation and encourage good actors in the space who fear an unclear regulatory landscape. The office can grant regulatory flexibility for companies so long as the company and the office come to an agreement on parameters.

ElizaChat is the first company to apply for regulatory mitigation through the AI office.

A regulatory mitigation agreement has not yet been reached, but Boyd said he anticipates the company will begin a phased rollout with a small population of teenage students in Utah public schools.

As safety standards are met, the program will likely be able to expand, he said. Ideally, the chatbot would be free for students to use and be able to escalate situations that require a licensed professional.

“It seems like it might be the sort of innovative experiment that is worth entertaining in the state of Utah,” Boyd said.