ChatGPT's 'thinking' surprisingly resembles that of people with aphasia, and that may be why it sometimes fails

Researchers have found that ChatGPT's internal processing resembles that of people with aphasia, in which rigid internal patterns produce fluent but often meaningless output. The study may help improve AI reliability and neurological diagnostics.

ChatGPT delivers perfectly formulated answers, but they aren't always accurate. Japanese researchers have found parallels with aphasia, a language disorder in which speech sounds fluent but often lacks meaning.

Using an energy landscape analysis, the team compared brain activity patterns in people with aphasia to internal signal patterns of large language models (LLMs). They found striking similarities in how information moves through both systems: like people with aphasia, the AI models can get stuck in rigid internal patterns that limit their ability to draw on broader knowledge.
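The article does not spell out how energy landscape analysis works. In the neuroscience literature it usually means fitting a pairwise maximum-entropy (Ising) model to binarized activity and locating the local minima of its energy function, the "valleys" a system's state can get stuck in. The Python sketch below illustrates that general idea on toy data; the variable names, toy signals, and fitting settings are illustrative assumptions, not details from the study.

```python
# A minimal sketch of energy landscape analysis on a toy binary time series.
# All data and parameters here are illustrative assumptions, not from the study.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy "activity" data: T time points, N channels, binarized to -1/+1 at each channel's mean.
T, N = 2000, 6
raw = rng.standard_normal((T, N)) + 0.5 * rng.standard_normal((T, 1))  # shared drive adds correlation
data = np.where(raw > raw.mean(axis=0), 1, -1)

# All 2^N binary states, used for exact likelihood gradients (feasible for small N).
states = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)

def energy(h, J, s):
    """Pairwise maximum-entropy (Ising) energy: E(s) = -h.s - 0.5 * s'Js."""
    return -s @ h - 0.5 * np.einsum('...i,ij,...j->...', s, J, s)

# Fit h and J by gradient ascent on the exact log-likelihood,
# i.e. match the model's means and pairwise correlations to the data's.
h = np.zeros(N)
J = np.zeros((N, N))
emp_mean = data.mean(axis=0)
emp_corr = (data.T @ data) / T
np.fill_diagonal(emp_corr, 0.0)

for _ in range(500):
    logp = -energy(h, J, states)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    model_mean = p @ states
    model_corr = (states.T * p) @ states
    np.fill_diagonal(model_corr, 0.0)
    h += 0.2 * (emp_mean - model_mean)
    J += 0.2 * (emp_corr - model_corr)

# Local minima: states whose energy is below that of every single-flip neighbour.
E = energy(h, J, states)
minima = []
for idx, s in enumerate(states):
    neighbours = np.tile(s, (N, 1))
    neighbours[np.arange(N), np.arange(N)] *= -1
    if np.all(E[idx] < energy(h, J, neighbours)):
        minima.append((E[idx], s.astype(int)))

print(f"{len(minima)} local minima found")
for e, s in sorted(minima):
    print(f"E = {e: .3f}  state = {s}")
```

In a comparison of the kind the article describes, one would run such an analysis on both brain recordings and an LLM's internal signals and compare how many minima each landscape has and how easily the dynamics escape them; a system that rarely leaves a small set of shallow valleys is, in this picture, "stuck" in rigid patterns.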

The findings could help improve AI architectures, and the measured activity patterns could even serve as biomarkers for neurological conditions.

Lucas Schneider
