Doctors Are Using ChatGPT to Improve How They Talk to Patients - The New York Times

On Nov. 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial intelligence-powered chatbot.

“I was excited and amazed but, to be honest, a little bit alarmed,” said Peter Lee, the corporate vice president for research and incubations at Microsoft, which invested in OpenAI.

He and other experts expected that ChatGPT and other A.I.-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, like writing appeals to health insurers or summarizing patient notes.

They worried, though, that artificial intelligence also offered a perhaps too tempting shortcut to finding diagnoses and medical information that may be incorrect or even fabricated, a frightening prospect in a field like medicine.

Most surprising to Dr. Lee, though, was a use he had not anticipated: doctors were asking ChatGPT to help them communicate with patients in a more compassionate way.

In one survey, 85 percent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another survey, nearly three-quarters of respondents said they had gone to doctors who were not compassionate. And a study of doctors’ conversations with the families of dying patients found that many were not empathetic.

Enter chatbots, which doctors are using to find words to break bad news and express concerns about a patient’s suffering, or to just more clearly explain medical recommendations.

Even Dr. Lee of Microsoft said that was a bit disconcerting. “As a patient, I’d personally feel a little weird about it,” he said.

But Dr. Michael Pignone, the chairman of the department of internal medicine at the University of Texas at Austin, has no qualms about the help he and other doctors on his staff got from ChatGPT to communicate regularly with patients.

He explained the issue in doctor-speak: “We were running a project on improving treatments for alcohol use disorder. How do we engage patients who have not responded to behavioral interventions?”

Or, as ChatGPT might respond if you asked it to translate that: How can doctors better help patients who are drinking too much alcohol but have not stopped after talking to a therapist?

He asked his team to write a script for how to talk to these patients compassionately. “A week later, no one had done it,” he said. All he had was a text his research coordinator and a social worker on the team had put together, and “that was not a true script,” he said.

So Dr. Pignone tried ChatGPT, which replied instantly with all the talking points the doctors wanted. Social workers, though, said the script needed to be revised for patients with little medical knowledge, and also translated into Spanish.

The ultimate result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, began with a reassuring introduction:

If you think you drink too much alcohol, you’re not alone. Many people have this problem, but there are medicines that can help you feel better and have a healthier, happier life.

That was followed by a simple explanation of the pros and cons of treatment options. The team started using the script this month.

Dr. Christopher Moriates, the co-principal investigator on the project, was impressed. “Doctors are famous for using language that is hard to understand or too advanced,” he said. “It is interesting to see that even words we think are easily understandable really aren’t.” The fifth-grade-level script, he said, “feels more genuine.”

Skeptics like Dr. Dev Dash, who is part of the data science team at Stanford Health Care, are so far underwhelmed by the prospect of large language models like ChatGPT helping doctors. In tests performed by Dr. Dash and his colleagues, the replies they received were occasionally wrong but, he said, more often were not useful or were inconsistent. If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation worse.

“I know physicians are using this,” Dr. Dash said. “I’ve heard of residents using it to guide clinical decision making. I don’t think it’s appropriate.”

Some experts question whether it is necessary to turn to an A.I. program for empathetic words. “Most of us want to trust and respect...