H16 News

By Swaleha | Published on May 20, 2025


AI chatbots mimic language disorder seen in humans, study finds

Researchers found that AI chatbots like ChatGPT share internal signal patterns with brains affected by Wernicke’s aphasia. Both produce fluent but often incoherent language due to rigid processing dynamics.

Scientists used energy landscape analysis to compare the information processing of LLMs with brain scans of people with different forms of aphasia. They found that both systems generate speech through rigid internal dynamics, producing output that is fluent but often confusing. These similarities could help engineers improve AI technology and help clinicians diagnose language disorders.

ChatGPT, Llama, and other AI chatbots may work more like the human brain than expected, especially when it comes to producing language. A study at the University of Tokyo revealed that the language generated by large language models (LLMs) is similar to the language used by people with Wernicke’s aphasia.

Similar patterns, different origins

The researchers used energy landscape analysis, a tool borrowed from physics, to observe how signals move within both brains and AI systems. They examined how often each system switched between internal states and how long it remained in each one. In both LLMs and patients with receptive aphasia, signals cycled through a restricted set of states and lingered in them, indicating inflexible processing.
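The switch-frequency and dwell-time idea can be illustrated with a toy sketch. This is a hypothetical simplification for intuition only, not the study's actual energy landscape pipeline, and the state sequences below are invented examples:

```python
# Toy illustration: given a sequence of discrete "states" a system visits
# over time, measure how often it switches state and how long it lingers.
# Rigid dynamics show few switches and long dwell times.

def dwell_statistics(states):
    """Return (number of switches, mean dwell time) for a state sequence."""
    if not states:
        return 0, 0.0
    # Count transitions between consecutive, differing states.
    switches = sum(1 for a, b in zip(states, states[1:]) if a != b)
    # Each switch ends one dwell segment, so segments = switches + 1.
    mean_dwell = len(states) / (switches + 1)
    return switches, mean_dwell

# A flexible system hops between states often...
flexible = ["A", "B", "A", "C", "B", "A", "C", "B"]
# ...while a rigid system stays stuck in a few states.
rigid = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(dwell_statistics(flexible))  # → (7, 1.0): many switches, short dwells
print(dwell_statistics(rigid))     # → (1, 4.0): one switch, long dwells
```

In this caricature, the "rigid" sequence mimics what the study reports for LLMs and receptive aphasia: the system settles into a small number of states and stays there.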

The research, published in Advanced Science, notes that patients with Wernicke's aphasia speak fluently, yet their words often make little sense. LLMs behave in a similar fashion: their answers sound articulate but may contain false facts. Struck by this resemblance, the researchers set out to test whether the inner workings of the two systems were similar as well.

Implications for AI and medicine

The discovery has two potential applications. In medicine, it could help clinicians identify and monitor aphasia by analysing brain activity rather than relying on speech alone. In AI, engineers could improve chatbot performance by loosening the rigid internal signal patterns the analysis revealed.

The researchers caution against drawing the comparison too literally. "We're not claiming that chatbots suffer from brain damage," Professor Watanabe explained. Even so, the shared signal behaviour could inform advances in both AI and healthcare.


HSRNEWS

© gokakica.in. All Rights Reserved.