Using AI tools such as ChatGPT may hurt your long-term ability to think, learn and remember, a study has suggested.
Academics at the MIT Media Lab, a research laboratory at the Massachusetts Institute of Technology, tracked students who relied on large language models (LLMs) to help to write essays.
These students showed reduced brain activity, poorer memory and weaker engagement than those who wrote essays using other methods, the study found.
The researchers used electroencephalogram scans (EEGs), which measure electrical activity in the brain, to monitor 54 students in three groups over multiple essay-writing sessions: one that used ChatGPT, one that used Google, and one that relied on no external help.
The researchers at MIT found that teachers were also able to detect AI-generated essays
In the paper, titled “Your brain on ChatGPT”, the researchers concluded: “We demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring.”
The EEG showed that the more help students had from AI, the less their brains worked. People using ChatGPT struggled to remember or quote from the essays they had just written and reported feeling little ownership of their work. By comparison, the essays written by the “brain-only” writers were more original and their brains more active, according to the study — which has not yet been peer reviewed.
When participants who had been using ChatGPT switched to writing without it, their brain activity remained lower. Those who moved in the opposite direction, from unaided writing to AI, showed an increase, probably because they were trying to combine what they already knew with the new tool.
The MIT researchers added: “The LLM undeniably reduced the friction involved in answering participants’ questions compared to the search engine. However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ‘opinions’ (probabilistic answers based on the training datasets).
“This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as ‘top’ is ultimately influenced by the priorities of the LLM’s shareholders.”
Teachers were also able to detect the AI-generated essays as they recognised “the conventional structure and homogeneity” of the content.
A separate study published in Nature Human Behaviour last month found that chatbots produce a more limited range of ideas than a group of humans.
Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at Cambridge University, said that “there’s no suggestion here that LLM usage caused lasting ‘brain damage’ or equivalent” but that the study “suggests that AI could potentially foster a kind of ‘laziness’, at least within specific tasks and workflows”.
Professor Henry Shevlin says LLMs such as ChatGPT should not be used as a “crutch”
He added: “I’d really caution about drawing sweeping conclusions about the benefits or risks of AI in education from this study. Current data is conflicted, ranging from the very good to fairly negative, but it seems like it really matters how AI is used.”
Shevlin cited another, much larger study of LLMs in Nigerian schools, which found large improvements in students’ learning when the AI was used as “a tutor, not a crutch”.
“This reflects my own experience — it’s one thing to use AI as a personalised foreign language tutor, for example, and another to get it to do your homework,” he said.