Doctors Are Using ChatGPT to Improve How They Talk to Patients - The New York Times
On Nov. 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial intelligence-powered chatbot.

“I was excited and amazed but, to be honest, a little bit alarmed,” said Peter Lee, the corporate vice president for research and incubations at Microsoft, which invested in OpenAI.

He and other experts expected that ChatGPT and other A.I.-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, like writing appeals to health insurers or summarizing patient notes.

They worried, though, that artificial intelligence also offered a perhaps too tempting shortcut to finding diagnoses and medical information that may be incorrect or even fabricated, a frightening prospect in a field like medicine.

Most surprising to Dr. Lee, though, was a use he had not anticipated: doctors were asking ChatGPT to help them communicate with patients in a more compassionate way.

In one survey, 85 percent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another survey, nearly three-quarters of respondents said they had gone to doctors who were not compassionate. And a study of doctors’ conversations with the families of dying patients found that many were not empathetic.

Enter chatbots, which doctors are using to find words to break bad news and express concerns about a patient’s suffering, or to just more clearly explain medical recommendations.

Even Dr. Lee of Microsoft said that was a bit disconcerting. “As a patient, I’d personally feel a little weird about it,” he said.

But Dr. Michael Pignone, the chairman of the department of internal medicine at the University of Texas at Austin, has no qualms about the help he and other doctors on his staff got from ChatGPT to communicate regularly with patients.

He explained the issue in doctor-speak: “We were running a project on improving treatments for alcohol use disorder. How do we engage patients who have not responded to behavioral interventions?”

Or, as ChatGPT might respond if you asked it to translate that: How can doctors better help patients who are drinking too much alcohol but have not stopped after talking to a therapist?

He asked his team to write a script for how to talk to these patients compassionately. “A week later, no one had done it,” he said. All he had was a text his research coordinator and a social worker on the team had put together, and “that was not a true script,” he said.

So Dr. Pignone tried ChatGPT, which replied instantly with all the talking points the doctors wanted. Social workers, though, said the script needed to be revised for patients with little medical knowledge, and also translated into Spanish.

The ultimate result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, began with a reassuring introduction:

If you think you drink too much alcohol, you’re not alone. Many people have this problem, but there are medicines that can help you feel better and have a healthier, happier life.

That was followed by a simple explanation of the pros and cons of treatment options. The team started using the script this month.

Dr. Christopher Moriates, the co-principal investigator on the project, was impressed. “Doctors are famous for using language that is hard to understand or too advanced,” he said. “It is interesting to see that even words we think are easily understandable really aren’t.” The fifth-grade level script, he said, “feels more genuine.”

Skeptics like Dr. Dev Dash, who is part of the data science team at Stanford Health Care, are so far underwhelmed about the prospect of large language models like ChatGPT helping doctors. In tests performed by Dr. Dash and his colleagues, they received replies that occasionally were wrong but, he said, more often were not useful or were inconsistent. If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation worse.

“I know physicians are using this,” Dr. Dash said. “I’ve heard of residents using it to guide clinical decision making. I don’t think it’s appropriate.”

Some experts question whether it is necessary to turn to an A.I. program for empathetic words. “Most of us want to trust and respect...