Chatbots Could Transform Medical Care — But Not in the Way You Think
“It has the potential to allow for more humane care — for the right kind of doctors.”
Experts once predicted that a disruptive invention could revolutionize healthcare by ensuring on-demand, round-the-clock care. But they also feared the new device would burn out physicians and isolate patients.
The problematic technology in question? The telephone.
The debate over making medical care more accessible through technology is more than a century old — but so far, gadgets like phones have arguably offered a net benefit in enabling access to care. Now, some researchers worry that increasingly sophisticated artificial intelligence tools like OpenAI’s ChatGPT or Google’s Bard may tip the scale in the opposite direction, replacing some critical interactions between human healthcare providers and their patients.
This problem is top of mind for Latha Chandran, executive dean at the University of Miami’s Miller School of Medicine.
“How can we use ChatGPT to enhance education, to enhance clinical care?” Chandran tells Inverse. “It’s a good tool — how can we integrate that into [our work] rather than restrict its usage or fight it?”
To get ahead of the trend, medical schools like Chandran’s are considering how to make such a tool work for — and not against — healthcare providers. Ultimately, chatbots could even make healthcare more human.
Machine learning in medicine
Machine learning in medicine is not new — nor is it necessarily bad. In 2016, computer scientist Geoffrey Hinton suggested that AI could replace radiologists within five years. But clearly, we’re nowhere near a radiologist-free world in 2023.
Nowadays, many providers do harness algorithms in some aspect of their work. In 2022, 2 in 5 physicians said they planned to incorporate artificial intelligence into their practice within the next year, while nearly 1 in 5 were already using it, according to a survey by the American Medical Association.
“It’s just a beginning.”
As practices make this shift, it isn’t clear which tasks should be delegated to computers. Recent advances in the tech could allow models to outperform doctors in diagnosing conditions such as cancer and eye diseases or predicting heart attack risk — though these claims are controversial and lack strong evidence. And to many people’s surprise, ChatGPT recently passed the United States Medical Licensing Exam.
“We see that AI in certain very narrowly defined fields outperforms humans, like in pattern recognition, but this is not the major impact — it’s just a beginning,” Jörg Goldhahn, head of ETH Zürich’s Institute of Translational Medicine, tells Inverse.
AI assistants to the rescue?
One thing researchers tend to agree on: Artificial intelligence models like ChatGPT could help physicians work more efficiently — and potentially save patients money.
Engineers make models like OpenAI’s GPT-3 and GPT-4 by feeding massive amounts of data into a supercomputer — the spin-off model ChatGPT, for example, learned from roughly 300 billion words of text culled from the internet, spanning many areas of expertise, including medicine. Researchers have also created customized medical chatbots for specific scenarios, such as weight management and care for patients with Alzheimer’s.
Chatbots could assist a doctor by taking notes, setting reminders, or offering treatment suggestions. This could free up more of a physician’s attention for the patient — after all, primary care doctors can spend up to six hours a day filling out digital health records.
“That’s a huge cause of burnout for physicians,” Chandran says.
AI could also take over straightforward administrative tasks, like setting up appointments and finding health clinics for patients.
It could also help people seeking quick, basic information about their health — chatbots don’t take vacations or get sick, ensuring real-time, 24-hour responses. Some people may prefer to speak with a computer rather than a person when learning about a diagnosis they feel ashamed about, Goldhahn writes in a 2018 BMJ opinion paper.
You may, like me, regularly struggle with somewhat embarrassing podiatric ailments and be tempted to Google your symptoms in search of an answer. But rather than spend time clicking around various health websites looking for advice, a chatbot can synthesize that same information into a few easy-to-read paragraphs almost instantly. So I asked ChatGPT: What's the best treatment for athlete’s foot?
Athlete’s foot is a fungal infection that affects the skin on the feet. It is typically characterized by a red, scaly rash and itching between the toes. The best treatment for athlete’s foot may depend on the severity and location of the infection, but over-the-counter antifungal creams, powders, and sprays can be effective in treating mild cases.
These topical treatments usually contain antifungal agents like terbinafine, clotrimazole, miconazole, or tolnaftate, which work by stopping the growth of the fungus. They should be applied as directed on the label and continued for at least 2 to 4 weeks to ensure that the infection is fully cleared.
If the infection is severe or doesn’t respond to over-the-counter treatments, a doctor may prescribe a stronger antifungal medication or an oral medication. It’s important to keep the affected area clean and dry, wear breathable shoes and socks, and avoid sharing shoes, socks, or towels with others to prevent the spread of the infection.
Thanks, doc. So can we trust ChatGPT’s advice for my foot concerns? Perhaps. (Everything the bot suggested aligned with advice from official sources.) But more complicated ailments and concerns may trip up the bot, with potentially serious results.
Chatbots may be able to pass the medical licensing exam, but they are still prone to making some major errors. Take, for example, the obvious blunders from both Bing’s GPT-powered bot and Google’s Bard in public demos — the former made up stuff about vacuums, while the latter wrongly claimed that the James Webb Space Telescope took the first image of an exoplanet. When offering information on someone’s medication or symptoms, even small mistakes could have grave consequences.
And while doctors tend to operate with implicit racial biases that can harm patients of color, AI isn’t the neutral, objective solution that some tech companies claim it is. For instance, we have already seen that an algorithm used by hospitals across the country recommended follow-up care far more frequently for white patients than Black patients.
Researchers also found that an algorithm kept Black patients from receiving kidney transplants because it assigned them artificially healthier kidney function than white patients, according to a 2020 study.
“It’s not clear why these new technologies would succeed when the old ones failed.”
Even if engineers were to program bots with specific cultural sensitivities for patient communication, algorithms can operate with baked-in prejudices created by engineers and the biased training data they use.
Sending sensitive personal information to AI bots also raises the question of data safety. While information patients share with doctors receives legal protection under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), “whether the company producing the chatbot is going to be constrained by those same legislative constraints is kind of unclear at this point,” Vanessa Rampton, a historian at ETH Zurich who researches medicine and technology, tells Inverse.
It’s also possible that AI bots could take cost-effectiveness too far and suggest decisions to doctors based purely on economics, Goldhahn notes, potentially preventing treatment for some patients.
And as with other emerging technologies, some companies say that conversational AI could improve care for underserved communities — particularly where human providers are in short supply. In areas with limited medical resources, chatbots could, for example, help decide how best to distribute them among patients. But companies have made similar claims about other innovations that failed to deliver, according to Rampton. “It’s not clear why these new technologies would succeed when the old ones failed,” she says.
Beyond some straightforward, mostly administrative applications, researchers have suggested more drastic uses for chatbots, including guiding patients through end-of-life care. In this instance, a chatbot could help manage stress, ensure people are taking their medications, and offer spiritual support, according to a team of researchers at Northeastern University.
Such a task usually requires empathy and an emotional connection, skills that some argue will never be available to machines — a broader sticking point in the use of chatbots in medical settings.
“Being able to really understand how a health issue relates to the broader aspect of living a life is something that physicians — the good ones — can get in a way that machines don’t,” Rampton says.
Choosing tomorrow’s doctors
For now, nobody’s sure whether Dr. GPT or Nurse Bard will make frequent house calls or whether chatbots will just serve as an occasional medical middleman. Still, Goldhahn says we can expect some major changes in the field. Depending on whom you ask, AI could radically transform the job of a primary care doctor and make it easier for one provider to manage many more patients at a time — which may be helpful given that the country may face a serious shortage of primary care physicians within the next decade.
Before we reach that point, experts agree that we need thorough regulations to protect patients. Broader rules do apply to algorithms that offer medical advice, but no chatbot-specific legislation currently exists.
“We have to shape it now,” Goldhahn says. “We don’t want to leave it up to the companies with their own agenda.”
These chatbot breakthroughs arrive amid an epidemic of physician burnout worsened by the Covid-19 pandemic. Even so, Chandran doesn’t think technology can (or should) eliminate the role of human physicians. And we’ll likely get a lot more of them: In 2021, medical schools saw a 17.8 percent jump in applicants, according to the American Medical Association.
“Chatbots can, in the most ideal situation, give you all the accurate information, but they can’t connect with you, they can’t form a relationship with you,” she says. “And therefore, as long as human beings need care, I think there will be a role for caring practitioners.”
On the flip side, Chandran thinks that bringing more AI into healthcare could ease some of the job’s intellectual demands and help medical schools find future physicians who prioritize the needs of their patients. “It has the potential to allow for more humane care — for the right kind of doctors,” she says.