Artificial general intelligence, or AGI, refers to an AI capable of matching human intelligence. For many researchers, it is the ultimate goal. Major AI companies such as OpenAI have made it clear that reaching this level is their mission, with some claiming we could get there within a decade – or even sooner. While most experts are betting on large language models, others argue that this architecture is too limited. Yann LeCun, for instance, left Meta to focus on so-called ‘world models’, which he considers more promising.
But what if everyone is wrong?
In their Nature essay, philosopher Eddy Keming Chen and his colleagues at the University of California – specialists in linguistics, computer science, and data science – argue that AGI may already be here. Today’s chatbots can pass the Turing test. Devised in 1950 by mathematician Alan Turing, the test places a human judge in written conversation with both another human and a machine and asks the judge to work out which is which. ChatGPT is now judged to be human more often than actual humans are. A few years ago, that achievement alone might have been enough to label a system as AGI.
Redefining intelligence: AGI vs superintelligence
One of the central issues is that there is no universally accepted definition of AGI. The authors suggest that artificial intelligence should be judged as part of a broader category of intelligence, one that also includes humans. If humans count as generally intelligent, then intelligence cannot require perfection or total mastery: no human performs every task at expert level across all domains.
They distinguish between AGI and superintelligence. In their view, AGI describes systems such as large language models that already demonstrate expert-level performance in many fields. Superintelligence, by contrast, would vastly outperform humans in every domain.
The researchers also address ten common objections. One of the most popular criticisms is that these systems are merely ‘stochastic parrots’, repeating patterns from their training data. Yet their ability to solve new mathematical problems and transfer skills across domains challenges that claim. Demanding revolutionary scientific breakthroughs before granting the label of intelligence, they argue, sets an unreasonably high bar – one most humans would not clear either.
Another objection is that language models lack a grounded model of the physical world. However, their capacity to predict the consequences of actions or to reason through physics problems suggests otherwise. AI systems are no longer limited to text – they now process images, audio, and video. While they do not have bodies, the authors contend that intelligence does not require one. With rapid advances in robotics, the era of ‘physical AI’ has already begun.
They also reject the idea that autonomy or a persistent autobiographical memory is essential to general intelligence. It is true that AI systems require vast amounts of data and cannot, for instance, learn to drive with only twenty hours behind the wheel, as a human can. But, they argue, what matters is the final level of performance – not how much data or time it takes to get there.
And what about hallucinations?
The authors also address the issue of AI hallucinations, though perhaps too briefly. According to them, hallucination rates have declined in the latest generations of models; humans, after all, are also prone to cognitive biases and false memories. Still, hallucinations remain common, and some studies suggest they may even be increasing. OpenAI itself has acknowledged that even with GPT-5, roughly one in ten responses may contain a hallucination.
In the end, the authors maintain that we have already achieved AGI. If we fail to recognise it, they argue, it may be because our understanding of intelligence is too narrow. Our anthropocentric bias prevents us from seeing the emergence of a new form of intelligence – one that is profoundly different from our own.
That perspective may also explain why figures such as Mark Zuckerberg increasingly prefer to speak of superintelligence instead. Perhaps the debate is no longer about whether AGI has arrived, but about what we choose to call the intelligence now unfolding before us.