Professor Yann LeCun, often called one of the ‘godfathers of AI’, believes that artificial general intelligence (AGI) is the most overrated idea in AI. At Davos recently, he offered a sobering message: the AI systems powering some of today’s biggest breakthroughs are fundamentally flawed, and, most alarmingly, the industry’s rush towards ‘agentic AI’ is a recipe for disaster.

The pioneering computer scientist did not mince words about the limitations of current large language models such as ChatGPT. His central claim challenges the entire trajectory of the AI industry: “We’re not going to attain human-level intelligence by simply making these systems bigger or better.”

“We’re not going to get to human-level intelligence or superintelligence by scaling up or refining the current paradigm,” LeCun stated. “There is a need for a paradigm change.”

His most alarming criticism targets the industry’s ongoing obsession with ‘agentic systems’, the AI assistants designed to take actions and accomplish tasks autonomously. The problem, he argues, is that these agents are built on large language models (LLMs), which lack a crucial capability: predicting the consequences of their actions.

“How can a system possibly plan a sequence of actions if it can’t predict the consequences of its actions?” LeCun asked. “If you want intelligent behaviour, you need a system to anticipate what’s going to happen in the world and predict the consequences of its actions.”
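
His point is easiest to see in code. Below is a minimal sketch, in Python, of what planning with a world model involves: candidate action sequences are scored by the states they are predicted to lead to. Every name here (the toy world_model, ACTIONS, plan) is a hypothetical illustration of the general idea, not any real system’s code.

```python
# Minimal sketch: planning requires a forward model that predicts the
# consequence (next state) of each action. Toy code, hypothetical names.
import itertools

ACTIONS = ["left", "right", "forward"]

def world_model(state, action):
    """Hypothetical learned forward model for a toy 1-D world:
    given the current position, predict the position after an action."""
    if action == "left":
        return state - 1
    if action == "right":
        return state + 1
    return state + 2  # "forward" advances two steps

def plan(state, goal, horizon=3):
    """Search over action sequences, scoring each by the state it is
    *predicted* to reach. Without world_model there is nothing to
    score a candidate sequence against, so no planning is possible."""
    best_seq, best_dist = None, float("inf")
    for seq in itertools.product(ACTIONS, repeat=horizon):
        s = state
        for action in seq:
            s = world_model(s, action)  # predict the consequence
        if abs(goal - s) < best_dist:
            best_seq, best_dist = seq, abs(goal - s)
    return best_seq

print(plan(state=0, goal=5))  # ('right', 'forward', 'forward')
```

Strip out world_model and the search loop has nothing to evaluate candidate sequences against; that missing forward model is exactly the gap LeCun attributes to LLM-based agents.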

He illustrated the gap with a striking example. “The first time you ask a 10-year-old to solve a simple task, they will do it without necessarily being trained. Within their first 10 hours of practice, a 17-year-old learns to drive a car. We had millions of hours of training data to train autonomous cars, and we still don’t have level-five autonomous driving. That tells you the basic architecture is not there.”

Understanding the real world

According to the former chief AI scientist at Meta, the fundamental problem is that language models operate in a simplified universe. “The real world is way more complicated than the world of language,” he explained. While we may think of language as the pinnacle of human intelligence, LeCun insists that “predicting the next word in text is not that complicated.”
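
To see what he means, it helps to reduce next-word prediction to its core: a single discrete choice from a finite vocabulary. A toy sketch with made-up probabilities (our illustration, not any real model’s code):

```python
# Next-word prediction at its simplest: pick the most probable token
# from a finite vocabulary. Toy values, not any real model's output.
probs = {"the": 0.05, "cat": 0.05, "sat": 0.80, "mat": 0.10}  # hypothetical P(next word | "the cat")
next_word = max(probs, key=probs.get)
print(next_word)  # "sat": one discrete choice among a handful of symbols
```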

Real intelligence, he argued, requires understanding the physical world. “Sensory data is high-dimensional, continuous, and noisy, and generative architectures do not work with this kind of data. The type of architecture we use for LLM generative AI does not apply to the real world.”
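
A back-of-the-envelope comparison shows the scale gap he is pointing at. The figures below are our own illustrative assumptions (reading speed, camera resolution), not numbers LeCun cited:

```python
# Rough comparison of text bandwidth vs. raw visual bandwidth.
# All figures are illustrative assumptions, not LeCun's numbers.

# Text: a fast reader manages ~250 words/minute at ~5 characters/word,
# about 1 byte per character.
text_bytes_per_sec = 250 / 60 * 5             # ~21 bytes/s

# Vision: a modest 640x480 RGB camera at 30 frames/s, 1 byte per channel.
video_bytes_per_sec = 640 * 480 * 3 * 30      # ~27.6 million bytes/s

print(f"text : {text_bytes_per_sec:12,.0f} bytes/s")
print(f"video: {video_bytes_per_sec:12,.0f} bytes/s")
print(f"ratio: {video_bytes_per_sec / text_bytes_per_sec:12,.0f}x")
```

Even at these modest settings, raw visual input arrives over a million times faster than text, and each value is continuous and noisy rather than a clean symbol from a fixed list.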

Apart from technical limitations, LeCun spoke about what he considers the most pressing danger: the consolidation of AI control among a handful of companies. “Capture and centralised control of AI is the biggest danger,” he warned, “because it will mediate all of our information diet.”

LeCun is unmoved by notions of killer robots or an AI takeover; instead, he worries about a future in which our entire digital diet is mediated by AI systems from just a handful of proprietary companies on the West Coast of the US or in China.

“We’re in big trouble for the health of democracy, cultural diversity, linguistic diversity, and value systems,” LeCun said. “We need a highly diverse population of AI assistants for the same reason we need diversity in the press, and that can only happen with open source.”

The industry’s retreat from open research

The AI pioneer also expressed dismay at the industry’s shift away from open research. “The biggest factor in progress was not any particular contribution, it’s the fact that AI research was open,” he said, describing how researchers would publish papers, release code, and accelerate collective progress.

“What’s been happening the last few years, to my despair, is that more and more industry research labs have been closing up,” he said, pointing to OpenAI and Anthropic as “never open, in fact very closed.” Even formerly open organisations, such as Google’s AI labs and Meta’s FAIR, have become more restrictive.

Meanwhile, he noted, “the best open-source models at the moment come from China, and they’re really good. Everybody in the research community is using Chinese models.” This shift, LeCun argues, is slowing Western progress at a critical moment.