Virtual Assistants are bringing empathy back to healthcare

Things are getting better

It is said that the disruptiveness of technology is the leading reason why today's doctors and nurses lack empathy: technology encourages them to focus on multitasking and efficiency, looking at their monitors rather than at their patients. That is true to some extent, but it doesn't change the fact that the overload and inefficiency of the healthcare system are the real origin. Technology advances by the day and can't be stopped, and it has now reached a point where, with the power of Artificial Intelligence, it can start standing in for humans and mimicking their actions and behavior.

Among the advancements in digital health are chatbots and virtual assistants: a new, fully digital layer of medical personnel whose role in patient care will only grow as adoption rises. There is, however, a difference between chatbots and virtual assistants in healthcare. Chatbots are typically built for a specific purpose or a few user story paths, such as booking an appointment with a doctor, tracking your health parameters, or getting reminders. Virtual Assistants, on the other hand, can be seen as advanced, very smart chatbots: a more comprehensive solution that opens up new possibilities in human-computer interaction powered by AI.
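
To make the distinction concrete, here is a toy sketch in Python, purely illustrative and not from any real product, of the narrow, scripted kind of chatbot described above. Everything outside its few hard-coded paths is exactly where a full virtual assistant would need to take over:

```python
def handle_message(text: str) -> str:
    """Route a patient message to one of a few hard-coded chatbot paths."""
    lowered = text.lower()
    if "appointment" in lowered:
        return "Sure, let's book an appointment. Which day suits you?"
    if "blood pressure" in lowered or "weight" in lowered:
        return "Got it, I'll log that health parameter for you."
    if "remind" in lowered:
        return "Okay, I'll set a medication reminder."
    # Everything outside the scripted paths is where a full virtual
    # assistant (NLP, context, emotion) would have to take over.
    return "Sorry, I can only help with appointments, tracking, and reminders."

print(handle_message("Can you remind me to take my pills?"))
```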

We are made to talk and share emotions

If we look at the mHealth apps offered on the market today and in the years since this market has existed, we find mobile applications, software, and hardware devices that are mostly based on limited user input: pointing and clicking (tapping) on a GUI (graphical user interface). Where virtual assistants (VAs) can differentiate themselves and reach the next level in digital health is emotion. You can't detect emotion when a user taps a button inside your mHealth app; emotion can only be detected from natural input. Patients are human beings, so they give natural input in the form of conversation and sentences, in voice and in text. Humans have used language and conversation to communicate information and interact in society for thousands of years; it is entrenched in our DNA and in our brains, a foundational function of our everyday behavior. Communicating with language is natural and normal for us, especially for tasks like finding information, discussing a problem, learning, or letting someone know something. Tapping on a screen is not really natural, and it limits the information being transferred. When you hear a person's voice or read their message, you know a lot more about them holistically.

Text-based

Messaging apps are in everyone's hands today and we all use them daily; billions of messages are sent every day to communicate, share information, and hold conversations. Texting has become the default form of communication for humans who are far away from the person they are talking to. With text, there is a real possibility of detecting emotions and other insights that are essential to the revolutionary conversational user interface (CUI) and that are simply not possible with a legacy graphical one. When we receive a text message from a patient, the text contains a lot of information:

  • Emotional sentiment
  • The patient's intent or goals
  • Key pieces of information like symptoms (so-called entities in chatbot design)
  • Context
  • Emojis

This text input is analyzed with machine learning models from the field of Natural Language Processing (NLP). NLP is one part of the big artificial intelligence hype happening right now in healthcare, and for good reason. Besides analyzing pixels in medical imaging or other clinical data, AI can be used here to gather more information from the conversational, communicative side of the patient. Today, healthcare is known to be cold, fast, and all about profit. Using AI to rethink that fast and cold healthcare is great, and with NLP and VAs it can push healthcare back toward exactly what it drifts away from daily: EMOTION and TALKING WITH THE PATIENTS. Doctors don't have time to talk with patients and often don't understand them well enough; nurses and other medical personnel are no exception. With NLP, VAs can start doing this work and change the way things are currently done. Furthermore, it makes the technology more user-friendly and draws users toward exactly what they are missing today.
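
As a minimal sketch of what such an analysis can look like, the snippet below runs a patient message through off-the-shelf sentiment and entity models. It assumes the Hugging Face transformers library is installed; the default models are illustrative, and real symptom extraction would need a medical-domain model rather than a general-purpose one:

```python
from transformers import pipeline

# Off-the-shelf models: sentiment for the emotional signal, NER for entities.
sentiment_model = pipeline("sentiment-analysis")
entity_model = pipeline("ner", aggregation_strategy="simple")

message = "I've had a sharp headache since Monday and I'm really worried."

sentiment = sentiment_model(message)[0]
entities = entity_model(message)

print(f"Sentiment: {sentiment['label']} (score {sentiment['score']:.2f})")
for ent in entities:
    # The default NER model tags people/places/organizations; detecting
    # symptoms would require a model trained on clinical text.
    print(f"Entity: {ent['word']} ({ent['entity_group']})")
```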

Although CUI is the next step, combining it with GUI is still necessary, because a lot of data is shared in images, graphs, and illustrations; that will not go away. Only in a voice UI is there no GUI at all, and there we are talking about a limited set of user stories tied to specific moments: a patient waking up and lying in bed, a patient running or exercising, a patient busy with another task like cooking, driving, or holding a baby, or a patient with movement or vision impairments who cannot use their hands or see the screen.

Voice-based

Text messaging is a very big thing at the moment and is becoming ever more important, but humans learned to write far later than they learned to speak. Voice is entrenched DEEPER in our BASIC HUMAN BEHAVIOUR and carries MORE EMOTION AND MORE DATA. By using machine learning models not only to convert voice to text for analysis, but also to analyze the voice data itself for emotion, sentiment, and intent, we can learn a lot more about what the patient wants and HOW HE OR SHE FEELS. We can bring EMPATHY back to HEALTHCARE and, with VAs, make hospitals and the whole healthcare system friendlier. Detecting key information (entities) from voice data is about as effective as using text, because of the nature of what we are trying to detect: facts. But for soft signals like intent, emotion, mood, feelings, or mental symptoms, voice is a far richer source of data.

Today, we see smart home speakers and personal assistants like Siri, Google Assistant, Alexa, Cortana, and Bixby becoming more popular and giving developers and innovators strong support for integrating their solutions. This will definitely grow adoption in the healthcare space. What remains a challenge is data privacy and compliance with regulations like HIPAA and GDPR. Using voice puts our data and health information at greater risk of being compromised: voice is a sound wave that spreads in every direction, while text stays on a little screen with limited physical reach. Sound carries better than a small letter on a screen, which is why humans started talking before writing (although text lasts longer). For those reasons, VAs with voice capabilities will play a big role in your HOME environment or when you are wearing headphones (everyone wears headphones). The one thing that must be programmed in is to mute itself when your friends arrive, because you would feel pretty awkward if your VA announced over the home speakers, "Hey, did you take your diarrhea pills today?" while you were having dinner with friends.
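
As a toy illustration of analyzing the voice signal itself rather than just its transcript, the sketch below extracts simple prosodic features (pitch and loudness) of the kind emotion-detection models typically build on. It assumes the librosa audio library, and the file name is hypothetical; a real system would feed such features into a trained emotion classifier:

```python
import librosa
import numpy as np

# Hypothetical recording of a patient's spoken message.
audio, sr = librosa.load("patient_message.wav", sr=16000)

# Fundamental frequency (pitch) over time; wide variation can hint at arousal.
f0, voiced_flag, voiced_probs = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
pitch_mean = np.nanmean(f0)
pitch_std = np.nanstd(f0)

# Root-mean-square energy as a rough loudness measure.
rms = librosa.feature.rms(y=audio)[0]

print(f"Mean pitch: {pitch_mean:.1f} Hz (variation {pitch_std:.1f} Hz)")
print(f"Mean loudness (RMS): {rms.mean():.4f}")
```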

Conclusion

These technologies and features are very data-hungry; they will not work without data collected from many sources and across a wide spectrum of users. The models exist, but the data still has to be gathered so that the models can be trained and maintained effectively. mHealth applications are rising fast in adoption, which will make it easier to reach users and collect the necessary data. But people are people, and user adoption will not reach a high enough level unless the startups and companies developing digital health solutions realize that they need to build their products with empathy and emotion, because that is the real added value that can be offered to patients in the current healthcare system, and it is exactly what they NEED. Patients need EMPATHY and EMOTION, not cold data applications. Many benefits are becoming reality in healthcare with the advancement of AI and smart conversational interfaces. Doctors are becoming more and more overworked and stressed and cannot treat patients with enough empathy. Lifestyle and preventive medicine are growing in importance worldwide, and the foundation of that approach is communication based on empathy, which the healthcare system simply cannot afford today.

Presently, I am working with a big team of experts on a virtual assistant that has emotions and helps keep your heart healthy. Her name is Luana.
