Meet Dr. ChatGPT
The next iteration of Dr. Google has arrived. After decades of plugging in symptoms and scouring the internet for diagnostic clues, patients are turning to a new technology with their health questions: A.I. chatbots.
A survey last year found that about one in six adults — and a quarter of adults under 30 — regularly consult an A.I. bot like ChatGPT for medical information. To better understand why, my colleague Maggie Astor and I asked New York Times readers to share their own stories of medical consultations with chatbots.
Hundreds of people wrote in. In our conversations with them, we kept hearing versions of the same story: People aren’t getting what they need from the medical system; they say the wait times are too long, the doctors aren’t attentive, the bills are unaffordable.
Chatbots offer an alternative. There’s no waiting room, no 15-minute appointment in which you need to cram in all of your questions. The information is free, or close to it. And, because of chatbots’ relentless agreeableness, many feel like their concerns are finally being heard. (Read more in our story about the topic.)
Doctors we spoke with agreed that there are real flaws in the medical system. But they also worried about how often people seemed to be turning to chatbots — which have been known to give incomplete or entirely made-up answers — for such high-stakes decisions.
In today’s newsletter, I’ll tell you more about what we found out.
The nicer doctor
It’s not hard to see what so many people like about chatbots.
They have an encyclopedic knowledge of medical literature, sure. But so many users told us that a big part of the appeal was that the A.I. offered a kinder version of health care.
One woman asked for help diagnosing a tingly feeling in her hand that, she told ChatGPT, she suspected stemmed from an issue with her median nerve.
“You’re describing something that fits beautifully with how the median nerve runs through the forearm,” it replied.
Chatbots often wrote how sorry they were to hear about the users’ symptoms and how “great” and “important” their questions were. Sometimes, they even commiserated with users about the health system. When one woman complained that her doctor’s office had been dismissive, a chatbot offered this reassuring reply:
[Image: screenshot of the chatbot's reassuring reply]
Another woman, frustrated that her human provider wasn’t matching ChatGPT’s bedside manner, sent her oncologist a list of kind messages the bot had sent her — things she thought the doctor “should have said to me.”
Three’s a crowd
This shift has created a tricky situation. As patients turn to A.I. for a first opinion, the doctor-patient relationship is shifting from a dyad to a triad.
That’s not always a bad thing. Patients said they felt empowered to push back when they didn’t get ideal treatment, and doctors said patients who used ChatGPT often came to appointments with a clearer understanding of their conditions. Doctors also said there were times patients brought a helpful A.I. suggestion the doctors themselves hadn’t yet considered.
But problems can arise when patients start cutting out doctors altogether. An ethicist I spoke to recalled a recent case in which a patient was discharged from the hospital against medical advice, because her relative sided with ChatGPT’s treatment plan over what her team of doctors at Yale had proposed.
Many chatbots’ terms of service say they are not intended to provide medical advice. OpenAI and Microsoft told us they took the accuracy of health information seriously and were working with medical experts to improve their chatbots’ responses. But research has found that most models no longer display disclaimers when people ask health questions. And chatbots routinely suggest diagnoses, interpret lab results and advise on treatment.
The amount of trust placed in these models is especially worrying because we still don’t know how good they are at helping people manage their health. A study by researchers at Oxford, which has been published online but is not yet peer-reviewed, does not bode well. It found that participants using chatbots for help with a medical scenario made the appropriate call on next steps, like whether to summon an ambulance, less than half the time.
Even so, imperfect chatbots may be better than the health care many people have access to, said Dr. Robert Wachter, the chair of the medicine department at the University of California, San Francisco. “In many cases,” he told us, “the alternative is either bad or nothing.”
(Note: The Times has sued OpenAI for copyright infringement.)