Industry Voices—Is Dr. Alexa ready to see you now?

Smart speakers have become a ubiquitous part of our lives. In fact, by 2022 analysts expect that voice technology will reach 55% of U.S. households. Whether it’s a quick check on the day’s forecast or time to place a grocery order, consumers are accustomed to saying, “Hey, Alexa,” and receiving an answer in mere seconds. Now, with Amazon Alexa becoming HIPAA-compliant, voice technology’s intersection with modern healthcare is poised to explode.

The goal of a HIPAA-compliant Alexa is to give consumers the opportunity to ask a voice assistant questions about their health, refill a prescription or make an appointment with their provider—ushering in a whole new era of patient experience.

It’s no secret that the U.S. healthcare system today is confusing. The global lack of access to essential health services further exacerbates the situation, with at least 400 million people worldwide facing obstacles preventing them from receiving treatment. A harmless condition can become serious if a person doesn’t have access to the right resources, information or tools to help follow their treatment plan.

While the shift to voice-activated technology in healthcare can reduce common barriers to care, there are still significant challenges for tech titans to overcome. Security and compliance remain top concerns, but more practical limitations must also be addressed before voice technology can become mainstream and viable in the healthcare ecosystem.

Can you hear me now?

When most of us interact with voice assistants, we do so in our homes—a relatively quiet, contained location. Voice assistants work well in the confines of our home for just that reason: There is little distraction in quiet environments.

Now imagine a voice assistant in a patient’s room in a busy hospital. If the device picks up music or audio from a TV, overhead announcements, ambient noise or other voices, all of which are common in a clinical setting, its ability to correctly and accurately capture voice interactions diminishes.

Laura vs. Lauren vs. Laurie

Correctly identifying and authenticating the speaker is mission-critical when voice technology handles confidential patient information. Voice assistants such as Alexa work best when associated with your account, because they have your personal information and preferences saved.

While it is relatively easy to authenticate yourself at home or in the presence of family, the task becomes more difficult in a public setting such as a hospital or doctor's office. Having to divulge your identifiers audibly for all within earshot is a challenge—and one that compromises personal privacy.

If a voice assistant can’t authenticate that it’s in fact “Laura,” not “Lauren,” requesting her healthcare information, it will be difficult for voice technology to perform the sensitive function it’s poised to do. As the technology becomes more prevalent in the healthcare setting, a model that balances personal privacy and convenience will be important.
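
To make that balance concrete, consider a minimal, purely hypothetical sketch in Python. The voice-profile score, the 0.90 threshold and the silent phone confirmation below are assumptions made for illustration, not features of Alexa or any other assistant; the point is simply that a confident voice match plus a second, non-spoken factor could unlock protected health information without the patient announcing identifiers to the whole room.

```python
# Hypothetical sketch: gate the disclosure of protected health information (PHI)
# on a voice-profile match plus a silent second factor, so the patient never has
# to speak identifiers aloud. Every score, threshold and function below is an
# illustrative assumption, not a feature of Alexa or any other assistant.

VOICE_MATCH_THRESHOLD = 0.90  # assumed confidence needed to trust the voice profile


def voice_profile_score(speaker_audio_id: str, enrolled_profile_id: str) -> float:
    """Stand-in for a speaker-verification model's similarity score (0.0 to 1.0)."""
    return 0.93 if speaker_audio_id == enrolled_profile_id else 0.40


def second_factor_confirmed(patient_id: str) -> bool:
    """Stand-in for a push confirmation on the patient's own phone (nothing spoken)."""
    return True  # pretend the patient tapped "yes"


def may_disclose_phi(speaker_audio_id: str, patient_id: str) -> bool:
    """Only a confident voice match *and* a silent confirmation unlock the response."""
    if voice_profile_score(speaker_audio_id, patient_id) < VOICE_MATCH_THRESHOLD:
        return False  # "Laura" vs. "Lauren": the match is not confident enough
    return second_factor_confirmed(patient_id)


print(may_disclose_phi("laura-voice", "laura-voice"))   # True: right voice, confirmed silently
print(may_disclose_phi("lauren-voice", "laura-voice"))  # False: the voice profile does not match
```

In a real deployment the verification score would come from a speaker-recognition model and the second factor from the health system’s own app or portal, but the gating logic would look much the same.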

Understanding the difference between echinosis and ecchymosis

Having an unstructured conversation with Alexa is difficult. Most interactions are based on highly anticipated responses—think “20 questions.” Is it an animal, mineral or vegetable? It’s easy to plan for those interactions programmatically and anticipate how Alexa should respond. Real, unstructured interactions are far more complex: every permutation of how a user might phrase a request or provide information has to be considered.
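
To see why, here is a minimal sketch of how a structured, “20 questions”-style skill is typically wired up. The intent names and sample phrasings are invented for illustration and are not drawn from Amazon’s skill-building interface, but they show how anticipated wordings map cleanly to canned responses while anything unanticipated falls through to a generic fallback.

```python
# Hypothetical sketch of structured intent matching for a voice skill.
# Intent names and sample phrasings are invented for illustration and are not
# drawn from any real skill definition or SDK.

ANTICIPATED_UTTERANCES = {
    "refill my prescription": "RefillPrescriptionIntent",
    "order a refill": "RefillPrescriptionIntent",
    "make an appointment": "ScheduleAppointmentIntent",
    "book a doctor's appointment": "ScheduleAppointmentIntent",
}

RESPONSES = {
    "RefillPrescriptionIntent": "Okay, which prescription would you like to refill?",
    "ScheduleAppointmentIntent": "Sure, which provider would you like to see?",
    "FallbackIntent": "Sorry, I didn't catch that. Could you rephrase?",
}


def route_utterance(utterance: str) -> str:
    """Map a transcribed utterance to a predefined intent, if it was anticipated."""
    return ANTICIPATED_UTTERANCES.get(utterance.strip().lower(), "FallbackIntent")


# An anticipated phrasing works; an unanticipated one lands in the fallback.
print(RESPONSES[route_utterance("Refill my prescription")])
print(RESPONSES[route_utterance("My pills are running low")])
```

Real skills rely on slot types and natural-language models rather than literal string lookups, but the brittleness is the same: a patient who says “my pills are running low” instead of “refill my prescription” lands in the fallback.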

To add another layer of complexity, clear voice communication in a noisy and busy clinical setting—particularly when complex medical vocabulary is required—can be tricky for voice assistants to master. Terms like “echinosis” and “ecchymosis” sound similar but mean two very different things. For voice assistants to be helpful in evaluating medical diagnoses and dosages, the software needs to have sophisticated conversational skills.
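
One way a developer might hedge against sound-alike terms is sketched below, using Python’s standard difflib as a rough stand-in for the acoustic confidence a real speech pipeline would report: compare what was heard against the expected medical vocabulary and ask for confirmation whenever more than one plausible match exists.

```python
import difflib

# Illustrative two-step check: "hear" a term, compare it against the expected
# vocabulary, and confirm with the user whenever more than one plausible match
# exists. String similarity here is only a stand-in for the acoustic confidence
# a real speech pipeline would report; the vocabulary is deliberately tiny.

MEDICAL_TERMS = ["echinosis", "ecchymosis", "edema", "erythema"]


def resolve_term(heard: str, cutoff: float = 0.6):
    """Return (best_match, needs_confirmation) for a term picked up by speech recognition."""
    candidates = difflib.get_close_matches(heard.lower(), MEDICAL_TERMS, n=3, cutoff=cutoff)
    if not candidates:
        return None, True             # nothing plausible: ask the user to repeat
    if len(candidates) > 1:
        return candidates[0], True    # look-alike terms exist: confirm before acting
    return candidates[0], False       # a single confident match


# A mangled transcription like "ekimosis" sits close to more than one real term,
# so the skill should confirm instead of guessing.
term, needs_confirmation = resolve_term("ekimosis")
if needs_confirmation:
    print(f"Did you mean '{term}'?" if term else "Sorry, could you repeat that?")
else:
    print(f"Proceeding with '{term}'.")
```

Guessing wrong about a grocery item is an annoyance; guessing wrong about “ecchymosis” can change a clinical decision, which is why confirmation prompts matter far more in this setting.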

These skills will improve with advances in machine learning, and interactions will not need to be as explicitly scripted. Healthcare interactions can be complex. Knowing when to branch out from a logic-based response, and how to handle an unexpected event, are also capabilities that need to evolve.

Voice technology building blocks and skills

Writing good software for a streamlined user interface is difficult. The reality is that voice-activated user experiences are a whole new way of thinking for most engineers. Approaching voice technology with little to no background in voice software is a recipe for vast disparities in the quality and utility of the resulting voice experiences.

The challenge is less for Amazon Alexa and more for hospitals, health systems, care providers and the developers that build for them. Developers must ensure they make experiences that are easy, safe and intuitive. Do it well, and you’ll gain fans. Do it poorly, and you’ll find users running from the technology.

Alexa’s HIPAA compliance opens the floodgates of innovation for integrating voice-related technology into the healthcare sector. But as with any new technology, there’s a lot of work ahead to make voice assistants mainstream and viable in a care setting.

Robin Cavanaugh is the chief technology officer of GetWellNetwork.