ChatGPT's 'thinking' surprisingly resembles that of people with aphasia. That's why it sometimes fails

Researchers found that ChatGPT's processing resembles aphasia, a condition in which fluent but often meaningless speech arises from rigid internal patterns. The study may help improve AI reliability and neurological diagnostics.


ChatGPT delivers perfectly formulated answers, but they aren't always accurate. Japanese researchers have found parallels with aphasia, a language disorder in which speech sounds fluent but often lacks meaning.

Using an energy landscape analysis, the team compared brain activity patterns in people with aphasia to internal data from large language models (LLMs). They discovered striking similarities in how information is processed, suggesting that AI models, like humans with aphasia, can get stuck in rigid patterns, limiting their ability to access broader knowledge.
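The idea behind energy landscape analysis can be illustrated with a toy model. A common approach fits a pairwise maximum-entropy (Ising) model to binarized activity data and looks for local minima of its energy function; deep, hard-to-leave minima correspond to the kind of rigid internal states described above. The sketch below uses invented parameters, not data from the study:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy energy landscape (illustrative; biases and couplings are random,
# not fitted to any real brain or LLM data). Each binary pattern
# s in {-1, +1}^n has energy E(s) = -h·s - 0.5 * s^T J s.
n = 5                                  # number of units (toy size)
h = rng.normal(0, 0.1, size=n)         # local biases (assumed values)
J = rng.normal(0, 0.3, size=(n, n))
J = (J + J.T) / 2                      # symmetric pairwise couplings
np.fill_diagonal(J, 0)

def energy(s):
    return -h @ s - 0.5 * s @ J @ s

def is_local_minimum(s):
    # A state is a local minimum if flipping any single unit raises energy.
    e = energy(s)
    for i in range(n):
        t = s.copy()
        t[i] *= -1
        if energy(t) < e:
            return False
    return True

# Enumerate all 2^n states and keep the local minima ("attractors").
minima = [np.array(s) for s in itertools.product([-1, 1], repeat=n)
          if is_local_minimum(np.array(s))]
print(f"{len(minima)} local minima out of {2**n} states")
```

In this picture, a system that keeps relaxing into the same few minima regardless of input is "stuck", which is roughly the dynamic the researchers report in both aphasic brain activity and LLM internals.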

The findings could help improve AI architecture and even serve as biomarkers for neurological conditions.
