Omnipresent and non-judgmental, ‘chatbots’ allow patients to open up
July 3, 2018
TORONTO – Dr. John Reeves opened the HealthBot 2018 conference with a question: How many in the audience had been at HealthBot 2017? No one raised a hand.
“That’s because it didn’t exist,” Reeves chuckled.
But interest in chatbots – software that simulates human conversation through the use of AI technologies like natural language processing, big data management and pattern recognition – is growing rapidly.
HealthBot 2018, examining the role of ‘chatbots’ or ‘bots’ in healthcare, had a last-minute change of venue when free registrations outgrew the original space at MaRS Discovery District, which borders the main campus of the University of Toronto. Organizers shifted the show around the corner to Mount Sinai Hospital’s 18th-floor Ben Sadowski Auditorium, a 230-seat lecture-style theatre.
“We thought we might have about 50 people,” said Reeves, partner and chief medical officer of Conversation Health, a Toronto-based digital startup that develops bots for health industry clients. Conversation Health assembled a cast of bot-expert speakers in the healthcare, marketing, and user experience fields to introduce attendees to the whys and hows of bot-building in a medical context.
Many of us have already encountered bots – such as when we’re shopping online and a little head pops up and asks if we need any help.
These talking heads can seem so lifelike and natural that it’s easy to think a human is operating behind the scenes.
In reality, they’re AI-powered computer programs, and they keep getting better at answering our questions and responding to our needs.
A major part of designing bots is interaction with actual users – in healthcare, that means patients. Patrick Glinski, a senior vice-president with Idea Couture who oversees the digital strategy house’s healthcare practice, notes that it’s all about designing conversations that matter.
“The complexity of human experience is infinite,” he said. But often, users aren’t consulted in the design of a technological project. “We always seem to forget to do that.”
He recalls an ethnographic study he conducted on psoriasis. Product developers expected a patient roundtable to be about constant itchiness, applying creams, the visible signs of the illness. They got a surprise.
“Pretty much everyone at the table said they’d thought about killing themselves in the last 48 hours,” he said.
Bots are dealing with people who happen to be patients, not the reverse. And the value, desirability and usability of a technology changes along with the level of experience the user has with his or her condition, he said.
“These are all deeply intertwined,” he said. “People lead very complex lives.”
Given the considerations, building a healthcare chatbot is a daunting task. “It’s really easy to deliver a bad chatbot,” said Lexi Kaplin, chief product officer for Conversation Health.
But properly designed, bots can be invaluable. They’re uniquely positioned, said Glinski: omnipresent, and crucially, non-judgmental. Studies have shown users are willing to tell chatbots things they’re hesitant to tell a doctor, so a richer history can be collected.
And the best will evoke a genuine persona for the user. An NHS stop-smoking bot was so effective, “it had people telling it their deepest, darkest secrets,” said Ritesh Patel. “They thought it was a human being and started talking to it like a psychiatrist.”
Health bots aren’t new, noted Shwen Gwee, general manager of Novartis AG’s digital accelerator in Cambridge, Mass., and former head of digital strategy at biotechnology company Biogen Inc.
The first dates back to 1966, when Joseph Weizenbaum of the Massachusetts Institute of Technology Artificial Intelligence Lab created ELIZA, which used rules and scripts to parody a Rogerian (self-actualizing) psychotherapist in interactions with people. Strip away the sophisticated interface, and today’s bots are essentially the same, Gwee said.
Both are built on decision trees and scripted responses – an ideal fit for healthcare. “That’s how healthcare works,” Gwee said: Decision trees are an integral part of the medical process, and the tightly regulated healthcare industry thrives on pre-approved scripting.
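The decision-tree-plus-scripted-responses pattern Gwee describes can be sketched in a few lines. The sketch below is purely illustrative – the node names and scripts are invented for this example, not taken from any product mentioned here – but it shows the shape of the thing: every node pairs a pre-approved script with the branches a user’s answer can select.

```python
# Illustrative sketch of a decision-tree chatbot: each node pairs a
# pre-approved script (the bot's line) with the branches a user's reply
# can select. All node names and wording here are hypothetical.

TREE = {
    "start": {"script": "Is your concern urgent or routine?",
              "branches": {"urgent": "urgent", "routine": "routine"}},
    "urgent": {"script": "Do you have chest pain or trouble breathing?",
               "branches": {"yes": "emergency", "no": "after_hours"}},
    "routine": {"script": "I can help you book a clinic appointment.",
                "branches": {}},
    "emergency": {"script": "Please call emergency services now.",
                  "branches": {}},
    "after_hours": {"script": "An after-hours nurse will follow up with you.",
                    "branches": {}},
}

def step(node_id, user_reply):
    """Return the next node for a user reply; stay put if unrecognized."""
    node = TREE[node_id]
    return node["branches"].get(user_reply.strip().lower(), node_id)
```

Because every script is written in advance, a regulated organization can have the whole tree reviewed and approved before the bot ever talks to a patient – which is exactly the fit Gwee points to.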
Newer AI technologies also fit the mold: machine learning, wherein software hones its own algorithms through interaction, rather than relying on engineered improvements, is based on the kind of pattern recognition that experienced doctors rely on for differential diagnoses.
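One minimal way to picture “honing its own algorithms through interaction” is a bandit learner that discovers, from user responses alone, which of two pre-approved phrasings works better. This is a toy sketch under that assumption – no product in this article is known to use this exact technique.

```python
import random

# Toy sketch of learning through interaction: an epsilon-greedy bandit
# that learns which of two pre-approved phrasings users respond to best.
# Hypothetical example; purely illustrative.

phrasings = ["How can I help?", "What brings you here today?"]
counts = [0, 0]       # times each phrasing was shown
rewards = [0.0, 0.0]  # positive responses each phrasing earned

def choose(epsilon=0.1):
    """Pick a phrasing: usually the best so far, sometimes explore."""
    if 0 in counts:                # show each phrasing at least once
        return counts.index(0)
    if random.random() < epsilon:  # occasionally explore
        return random.randrange(len(phrasings))
    return max(range(len(phrasings)), key=lambda i: rewards[i] / counts[i])

def record(i, responded):
    """Update the tally after observing whether the user engaged."""
    counts[i] += 1
    rewards[i] += 1.0 if responded else 0.0
```

The scripts themselves stay fixed and pre-approved; only the choice among them adapts, which keeps the learning loop inside the regulatory guardrails the speakers describe.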
With healthcare budgets and resources stretched to the limit, bots are being used to complement the work of clinical staff in a number of ways.
Brite Health helps patients in clinical trials keep on top of medications and appointments; the U.K.’s Babylon Health has started a six-month trial replacing the National Health Service’s (NHS) 111 help line with a “triage chatbot” that advises callers on urgent, but not life-threatening, after-hours situations.
And GRiST (Galatean Risk and Safety Tool) Gaia, also developed in the U.K., is an online psychiatric assessment tool, built on cognitive behavioural therapy, that recommends support services.
“At some point, bots have to come to reality and solve big business problems, which we know we have in healthcare,” said program moderator Ritesh Patel, chief development officer with Ogilvy Health and Wellness in New York. Increasingly, said Patel, that’s online.
In fact, a prime role for healthcare chatbots is to counter online misinformation – to combat “Dr. Google,” Patel said. Of the NHS helpline, Patel said, “People were calling and saying, ‘I’ve Googled my symptoms and I’m dying, I need to see someone now.’”