
ChatGPT More Empathetic Than Real Doctors, Says Study, Amid Worries About Hallucinations


Alphabet Inc.’s (NASDAQ:GOOG) (NASDAQ:GOOGL) CEO Sundar Pichai, OpenAI co-founder and CEO Sam Altman, and other tech experts might be worried about ChatGPT’s hallucinations, but a new study suggests that the chatbot is far more empathetic than actual doctors. 

What Happened: The University of California San Diego, Johns Hopkins University, and other universities collaborated on a study in which OpenAI’s ChatGPT was posed 195 questions extracted from the AskDocs subreddit. 

The research team, consisting of healthcare professionals specializing in oncology, infectious disease, pediatrics, and internal medicine, scored the responses of both the chatbot and verified physicians from Reddit on a five-point scale, evaluating the “quality of information” and the “empathy or bedside manner” provided. 

See Also: Say Hi To ChatGPT’s ‘Incognito Mode’: OpenAI Strives To Give Users More Data Control

According to the study’s results, the clinicians favored the chatbot’s responses in 78.6% of the 585 evaluated scenarios, with the bot’s responses rated 3.6 times higher in quality and 9.8 times higher in empathy than those of the human physicians.

The study considered only physicians’ responses because the researchers expected physicians’ answers to be generally superior to those of other healthcare professionals or laypersons. 

“We do not know how chatbots will perform responding to patient questions in a clinical setting, yet the present study should motivate research into the adoption of AI assistants for messaging, despite being previously overlooked,” the study stated. 

Why It’s Important: Despite widespread knowledge of the limitations of AI chatbots, including their propensity for hallucinations and jailbreaks, numerous individuals remain overconfident in the abilities of ChatGPT and overlook the potential risks associated with seeking advice or information from such chatbots.

While the study results are optimistic, users need to understand that even GPT-4, the latest model powering ChatGPT, can make mistakes and misdiagnose — the primary reason doctors are cautious about letting the chatbot operate on its own. 

For the unversed, hallucination in the AI ecosystem refers to a chatbot giving confident answers that are not justified by its training data. 

Check out more of Benzinga’s Consumer Tech coverage by following this link.

Read Next: Samsung Reportedly Bans OpenAI’s ChatGPT, Google Bard, Bing AI: What Are They Afraid Of?

 


Posted-In: Artificial Intelligence ChatGPT Consumer Tech Doctors News Tech
