AGI offers only a hazy, overzealous goal. Enter SAI, a sensible, purpose-driven North Star

Do you prefer hype or rationality?


On a recent episode of the Dr. Data Show, my co-host Luba Glouhova and I tackled a new paper authored by AI luminary Yann LeCun alongside other researchers. We had been tipped off by another co-author of the paper, AI researcher Philippe Wyder, who reached out on social media to say the paper related to our prior episode since it “shows a path out of our muddled AGI discourse by embracing specialization.”

The paper proposes a much-needed pivot for the industry: Because the ever-popular notion of artificial general intelligence offers only a hazy, overzealous goal, these researchers have proposed a new North Star they call “superhuman adaptable intelligence.”

Defining Superhuman Adaptable Intelligence

What is SAI? The authors’ definition has two parts. First, SAI can adapt to exceed humans at any task humans can do – but crucially, it tackles one task at a time. Second, SAI can also adapt to useful tasks outside the human domain.

SAI establishes a new goal for the industry. Instead of trying to build a singular, all-capable solution that can do everything a human can do – the goal AGI represents – this framework champions a return to specialized, narrow AI. These researchers advocate leveraging massive amounts of data through self-supervised learning, as LLMs do, but they also advise that such tech then be adapted to solve specific problems.
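The pretrain-then-adapt recipe they describe – learn broad regularities from plentiful generic data, then specialize on a narrow task – can be sketched in miniature. The toy linear model and data below are my own hypothetical illustration, not anything from the paper:

```python
# Toy sketch of "pretrain broadly, then adapt narrowly."
# The model and data are hypothetical stand-ins, not the paper's method.

def fit_linear(data, w=0.0, b=0.0, lr=0.01, steps=500):
    """Train a one-feature linear model y = w*x + b by gradient descent."""
    n = len(data)
    for _ in range(steps):
        gw = sum(2 * ((w * x + b) - y) * x for x, y in data) / n
        gb = sum(2 * ((w * x + b) - y) for x, y in data) / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

# "Pretraining": plenty of generic data following a broad trend (y = 2x).
broad_data = [(k / 10, 2 * k / 10) for k in range(-50, 50)]
w, b = fit_linear(broad_data)

# "Adaptation": a handful of task-specific examples (y = 2x + 1) suffice,
# because we start from the pretrained weights rather than from scratch.
task_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = fit_linear(task_data, w=w, b=b, lr=0.05, steps=200)
```

Here the pretrained slope transfers directly, so the adaptation phase mostly has to learn the task-specific offset – a crude analogue of fine-tuning a foundation model for one narrow job.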

As Luba pointed out during our podcast discussion, one of the most consequential insights highlighted by the authors is the recognition that human intelligence itself is not terribly general. I had a similar reckoning back in 1991, right before I began my Ph.D. at Columbia: I realized that human skills are quite particular and specialized, shaped by millions of years of evolutionary adaptation within a specific environment.

Rather than attempting to fully replicate the arcane, unique distribution of capabilities specific to humans – a tall order, to say the least – the paper advocates focusing instead: “The AI that folds our proteins should not be the AI that folds our laundry.” Even as systems scale with computational power, they still benefit immensely from specializing for the task at hand. Directing limited resources toward an individual, valuable goal is more productive than trying to build a machine that is “general” in the sense of being good at everything – or even good at the more limited yet vast range of things humans happen to be capable of.

Out Of The Frying Pan?

But does coining a new buzzword really hold the potential to cure the AI industry’s chronic hype problem, namely the disputable promise that we’re rapidly headed toward AGI? I have reservations. By using the words “superhuman” and “intelligence,” the term still flirts with the very sci-fi, tipping-point singularity hype that has haunted the notion of “AI” since it was conceived of in the 1950s. It allows industry leaders like LeCun to remain intellectually sound and avoid AGI’s intrinsic incoherence, while simultaneously keeping enough buzzword appeal to secure massive VC funding for new startups. Indeed, his new startup just raised a $1 billion seed.

As Luba and I joked on the podcast, SAI might just be a new flavor of Kool-Aid for the masses – but at least it’s a healthier, organic, sugar-free Kool-Aid. The true gem in the acronym is the word “adaptable,” which centers projects on a grounded, measurable metric: how quickly a system can adapt to become good at a particular task.
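One way to ground that metric is to count how many task-specific updates a system needs before it hits a target error on a new task. The measurement below is my own toy illustration of that idea, with a hypothetical model, task, and threshold – not a metric proposed in the paper:

```python
# Toy "adaptability" metric: gradient-descent updates needed to reach a
# target error on a new task. Model, task, and threshold are hypothetical.

def steps_to_target(w, b, data, target_mse, lr=0.05, max_steps=10_000):
    """Fine-tune a one-feature linear model; return updates until target_mse."""
    n = len(data)
    for step in range(max_steps):
        mse = sum(((w * x + b) - y) ** 2 for x, y in data) / n
        if mse <= target_mse:
            return step
        gw = sum(2 * ((w * x + b) - y) * x for x, y in data) / n
        gb = sum(2 * ((w * x + b) - y) for x, y in data) / n
        w, b = w - lr * gw, b - lr * gb
    return max_steps

task = [(-1.0, -1.0), (0.0, 1.0), (1.0, 3.0)]  # y = 2x + 1
# A model whose weights already capture the broad trend (w = 2) adapts in
# fewer updates than one starting cold from zero:
warm = steps_to_target(2.0, 0.0, task, target_mse=0.01)
cold = steps_to_target(0.0, 0.0, task, target_mse=0.01)
```

Lower counts mean faster adaptation; on this kind of metric, specializing from a strong starting point beats training each task from scratch.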

I say this paper is a net positive. It represents a desperately needed, sober look at our field’s often unrealistic goals. It aims to steer the industry away from the misleading narrative of autonomous, human-level AI agents, and back toward technologies that provide concrete, feasible value. The philosophy presented aligns well with what I’ve been espousing regarding the need for hybrid AI. To make genAI products viable for full deployment that captures value at scale, we must treat each initiative as a specialized endeavor, applying a targeted reliability layer that narrowly form-fits the specific problem at hand.

If you would like to dive deeper into taming LLMs for practical use, be sure to check out the HYBRID AI 2026 conference taking place in San Francisco this May, where Luba and I will be speaking. You can access an overview of the event and a description of each enterprise presentation here. Disclosure: As the founding program chair, I am a partial owner of the Machine Learning Week conference series – which includes HYBRID AI 2026 – and I receive an honorarium for chairing.