Since the start of the pandemic, the face of healthcare has changed dramatically. The need for remote care has driven many new innovations, and exciting technologies like artificial intelligence (AI) are approaching the point of revolutionizing how healthcare is delivered. One of the innovations being explored within AI is conversational AI, or voice technology. While voice technology is still quite new, it is already making big waves in healthcare as it is rapidly adopted in homes and vehicles through services like Alexa and Siri.

The Conversational AI Market in the Healthcare Setting

Conversational AI makes it possible for people to interact with systems using their voice. Within the context of healthcare, this technology is moving to the forefront of the clinician and patient experience. Using it, clinicians can significantly reduce their administrative burden while also improving their relationship with patients. Rather than typing notes in front of patients or spending hours after appointments filling out charts, clinicians can let the computer create a record of the encounter and focus on the conversation at hand.

Currently, the conversational AI market is growing at a very fast rate, and many of the top companies are starting to focus on healthcare applications. For example, Google Cloud, Amazon Web Services, and Microsoft are all exploring applications of voice technology in the healthcare setting while also building partnerships with clinics to pilot new systems. Outside of these major names, a large number of startups are also becoming involved with conversational AI, hoping to carve out specific niches. These companies have many hurdles to overcome, especially because healthcare demands absolute accuracy. Dialects, accents, clinical jargon, and more all present challenges.

How Voice AI Can Relieve Pressure on Providers

One of the companies that has made the most progress with conversational AI in healthcare is Nuance Communications, which is behind Dragon Medical One, a product that lets clinicians dictate directly into the electronic health record. About five years ago, the company pivoted from dictation alone to taking the technology a step further: creating a system that can listen to a two-party conversation and translate it into a clinical note. The company calls this technology ambient clinical intelligence (ACI). Such a system would radically reduce the amount of time clinicians spend on the computer and give them more time for patient care.

This year, Microsoft acquired Nuance. With the backing of Microsoft, the new ACI system is already being piloted in a small hospital system in Pennsylvania, and feedback has been good so far. The system is able to create nuanced notes even in especially challenging appointments, such as with a patient who has dementia. It has also proven very good at distinguishing between similar-sounding medical terms. For example, the system knows the difference between cabbage and a coronary artery bypass graft (CABG), which are pronounced the same way. For their part, clinicians have been pleasantly surprised at the level of care they can deliver when they focus all of their attention on the patient.

Voice Commands in the Operating Room and Beyond

Another company that has already begun trialing solutions is Amazon. An academic medical center in Houston is exploring how conversational AI can be implemented in the operating room with technology from Amazon. The system allows voice commands to be used to interact with the electronic health record and some other clinical applications. Voice commands can be used to set timers for vital alerts and maintain other records that are important during a surgical procedure. Since surgeries happen in a sterile field, removing the need to physically interact with a computer helps speed up procedures while eliminating much of the clerical work involved.
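
To give a rough sense of how transcribed voice commands might be routed to actions such as starting a timer or logging a note, here is a minimal, hypothetical sketch. The command phrases, handler names, and timer logic are illustrative assumptions, not details of the Houston system.

```python
# Hypothetical sketch: routing transcribed voice commands in an OR to
# simple actions. The phrases, handlers, and EHR stub are illustrative
# assumptions, not the actual implementation described in this article.
import re
import threading


def start_timer(minutes: int, label: str) -> None:
    """Announce when a labeled timer expires (stand-in for a vitals alert)."""
    threading.Timer(minutes * 60, lambda: print(f"ALERT: {label} timer done")).start()
    print(f"Started {minutes}-minute timer: {label}")


def log_to_record(text: str) -> None:
    """Stand-in for writing an entry to the electronic health record."""
    print(f"EHR entry recorded: {text}")


def route_command(utterance: str) -> None:
    """Match a transcribed utterance against known command patterns."""
    utterance = utterance.lower().strip()
    timer_match = re.match(r"set a (\d+) minute timer for (.+)", utterance)
    if timer_match:
        start_timer(int(timer_match.group(1)), timer_match.group(2))
    elif utterance.startswith("record "):
        log_to_record(utterance.removeprefix("record "))
    else:
        print(f"Unrecognized command: {utterance}")


route_command("Set a 30 minute timer for tourniquet check")
route_command("Record estimated blood loss 200 milliliters")
```

In a real deployment the matching would be handled by a cloud speech and intent service rather than regular expressions, but the overall flow, from utterance to matched intent to action, is the same.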

Microphones in the operating room capture voices, which are then processed in the cloud. Initial testing of the technology has taken place in a simulation center. Alongside this work, the developers are also building ambient listening technology for patient exam rooms, which is very similar to the Nuance product. The system in Houston uses microphones in the room, controlled by a smartphone, tablet, or computer, that send audio to the cloud to be transcribed, processed, and indexed. The cloud uses Amazon Comprehend Medical, a HIPAA-compliant service, to parse the medical terminology. After each visit, summaries of the interaction are emailed to both the patient and the clinician, with the latter receiving a progress note for review.
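
As an illustration of the terminology-parsing step, the sketch below calls Amazon Comprehend Medical, the service named above, through the boto3 Python client. The sample transcript, the region, and the way results are grouped into a note skeleton are assumptions for illustration, not details of the hospital's pipeline.

```python
# Minimal sketch: pull medical entities out of an exam-room transcript
# with Amazon Comprehend Medical. The transcript text and the summary
# format below are illustrative assumptions.
import boto3

comprehend_medical = boto3.client("comprehendmedical", region_name="us-east-1")

transcript = (
    "Patient reports chest pain for two days. "
    "History of hypertension, currently taking lisinopril 10 mg daily. "
    "Plan: order an EKG and a lipid panel."
)

response = comprehend_medical.detect_entities_v2(Text=transcript)

# Group detected entities by category (MEDICATION, MEDICAL_CONDITION,
# TEST_TREATMENT_PROCEDURE, etc.) to build a rough note skeleton.
note_sections = {}
for entity in response["Entities"]:
    note_sections.setdefault(entity["Category"], []).append(
        f"{entity['Text']} (confidence {entity['Score']:.2f})"
    )

for category, items in note_sections.items():
    print(category)
    for item in items:
        print(f"  - {item}")
```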

The Future of Voice Recognition in Healthcare

We are still quite far from seeing this technology implemented in hospitals and clinics across the country, and integrating it into clinical workflows will take time and patience. The systems need to develop further in their understanding of accents and dialects, and a great deal of cultural translation happens in medical settings; figuring out how to handle that effectively with conversational AI will also take time. The early implementations are promising, but there is a lot of work to be done before these products can be widely adopted, and even then, getting physician buy-in will take a considerable amount of time and effort. Still, this technology could make a trip to the doctor's office or hospital a very different experience down the line.