
Cambridge Dictionary's Word Of The Year 'Hallucinate' Highlights AI's Big Problem


The Cambridge Dictionary has named ‘hallucinate’ its Word of the Year, a choice that reflects a significant shift in the word's meaning driven by the artificial intelligence (AI) industry.

What Happened: ‘Hallucinate’ has taken on a new meaning in the AI field, Business Insider reported. Beyond its original sense of perceiving things that do not exist, the term now refers to instances in which an AI presents false information as if it were accurate, occasionally with harmful results.

Well-known outlets such as Gizmodo, CNET, and Microsoft have been criticized for inaccuracies in their AI-produced articles. One lawyer even lost his job after OpenAI's chatbot, ChatGPT, fabricated nonexistent lawsuits that he submitted as references.

See Also: ‘Deeply Regret My Participation’: OpenAI’s Chief Scientist Who Voted To Oust Altman Vows To ‘Reunite’ The Company Amid Staff Rebellion

Earlier this year, Morgan Stanley analysts highlighted the propensity of ChatGPT to fabricate facts, a problem they predict will persist for some years. This issue has prompted concerns among business leaders and misinformation experts about AI’s potential to heighten online misinformation.

Wendalyn Nichols, the publishing manager of the Cambridge Dictionary, underscored the importance of human critical thinking when using AI tools, stating, “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools.”

Why It Matters: The term ‘hallucinate’ has become a focal point for tech experts and internet users alike since OpenAI released its revolutionary chatbot, ChatGPT, in November 2022. Alphabet Inc. CEO Sundar Pichai even acknowledged that the AI industry is grappling with “hallucination problems” with no clear solution.

In a move toward self-regulation and responsible AI, Large Language Model (LLM) builder Vectara released its open-source Hallucination Evaluation Model in November 2023. The model aims to quantify how far an LLM's output deviates from the facts in its source material, a step toward removing barriers to enterprise adoption and mitigating risks such as misinformation.
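To make that concrete, below is a minimal sketch of how such an evaluation model can be queried, assuming the Hugging Face checkpoint vectara/hallucination_evaluation_model and the cross-encoder loading path documented at the model's initial release; the sample texts are hypothetical, and later versions of the model may use a different loading API.

```python
# Minimal sketch: scoring (source, generated) pairs for factual consistency.
# Assumes the vectara/hallucination_evaluation_model checkpoint loads as a
# cross-encoder via the sentence-transformers library, per its original
# November 2023 model card; names and APIs may have changed since.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

pairs = [
    # (source text, generated text): the model scores whether the second
    # string is factually consistent with the first.
    ("The court dismissed the case in 2021.",
     "The court dismissed the case in 2021."),
    ("The court dismissed the case in 2021.",
     "The court awarded $5 million in damages in 2019."),
]

scores = model.predict(pairs)  # ~1.0 = consistent, ~0.0 = likely hallucinated
for (source, generated), score in zip(pairs, scores):
    print(f"{score:.3f}  {generated}")
```

A score near 1 suggests the generated text is supported by its source, while a score near 0 flags a likely hallucination, so a system built this way could hold back or re-check low-scoring LLM output before it reaches users.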

Read Next: Sam Altman Was Not Fired From OpenAI For AI Safety Concerns

Image Via Shutterstock


Engineered by Benzinga Neuro, Edited by Pooja Rajkumari

The GPT-4-based Benzinga Neuro content generation system leverages the extensive Benzinga Ecosystem, including native data, APIs, and more, to create comprehensive and timely stories for you.


Posted-In: Artificial Intelligence, Cambridge Dictionary, Consumer Tech, Hallucinate, Hallucinations, News, Tech, General
