ChatGPT delivers fluently worded answers, but they are not always accurate. Japanese researchers have now found parallels with aphasia, a language disorder in which speech can sound fluent yet often lacks meaning.
Using energy landscape analysis, a method for mapping the recurring states a dynamical system settles into, the team compared brain activity patterns in people with aphasia to the internal activation patterns of large language models (LLMs). The dynamics turned out to be strikingly similar: like the brains of people with aphasia, the models tend to get trapped in rigid internal patterns, which limits their ability to draw flexibly on their stored knowledge.
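To make the idea of "getting trapped in rigid patterns" concrete, here is a toy sketch of the kind of energy landscape analysis described above, not the researchers' actual pipeline. In this family of methods, activity is modeled with an Ising-style pairwise energy function, and "attractor" states are found as local minima: binary activity patterns whose energy rises if any single unit is flipped. The bias vector `h` and coupling matrix `J` below are random illustrative assumptions, not fitted to any real brain or LLM data.

```python
import itertools
import numpy as np

def energy(state, h, J):
    # Ising-style energy: E(s) = -h.s - (1/2) s.J.s
    s = np.asarray(state, dtype=float)
    return -h @ s - 0.5 * s @ J @ s

def local_minima(n, h, J):
    # A state is a local minimum (an "attractor") if flipping
    # any single unit increases the energy.
    minima = []
    for state in itertools.product([-1, 1], repeat=n):
        e = energy(state, h, J)
        if all(
            energy(state[:i] + (-state[i],) + state[i + 1:], h, J) >= e
            for i in range(n)
        ):
            minima.append(state)
    return minima

# Illustrative random parameters for a 5-unit system.
rng = np.random.default_rng(0)
n = 5
h = rng.normal(0.0, 0.1, n)
J = rng.normal(0.0, 0.5, (n, n))
J = (J + J.T) / 2          # symmetric couplings
np.fill_diagonal(J, 0.0)   # no self-coupling
print(local_minima(n, h, J))
```

A system with few, deep minima keeps falling back into the same states, the rigidity described above, whereas a flatter landscape lets it move freely between many patterns.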
The findings could inform better AI architectures, and the analysis method could even yield biomarkers for neurological conditions.