Sequoia Capital now defines AGI vaguely: “the ability to figure things out.”

Such slippery hand-waving urges a pivot from AGI hype to hybrid AI’s feasible value.


Recently on The Dr. Data Show, my co-host Luba Gluhova and I dug into the evolving discourse surrounding artificial general intelligence – and its stubborn incoherence. A recent publication by the venture capital firm Sequoia Capital projected the arrival of AGI by 2026, defining the concept simply as “the ability to figure things out.”

From an engineering and practical standpoint, this definition merely restates the traditional, highly subjective definitions of AI. It is just another way to say “capable of reasoning,” which has long been a common, if circular, way to define AI.

The term AGI was meant to differentiate itself from AI. It was originally supposed to signify the “whole enchilada” – a virtual human capable of doing anything that a person can. Such a system would be fully autonomous, matching human performance across a wide range of tasks – effectively operating as a virtual employee.

However, as the profound technical challenges of achieving supreme autonomy have become apparent, definitions of AGI have shifted toward more subjective criteria, blurring the distinction between AGI and AI – which itself has always faced the existential problem of being undefinable. This shift appears to be an exercise in hand-waving meant to dodge criticism.

AI hype often emphasizes supreme autonomy – and most of the hype at least implies it. The notion of AGI is a natural extension of that aspect of the hype: If a system can do everything a person can, it needs no human in the loop.

While genAI possesses remarkable capabilities and offers substantial commercial value, it currently faces a critical reliability challenge. In automated enterprise workflows, such as customer service interactions or healthcare claims processing, even a minor error rate of 5% or less can render a fully autonomous system non-viable due to the operational risks of factual inaccuracies, ethical missteps or mishandled transactions.
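To make the stakes concrete, here is a back-of-envelope sketch. The daily interaction volume is a hypothetical assumption for illustration, not a figure from any real deployment; the 5% rate is the one cited above.

```python
# Back-of-envelope math behind the reliability argument above.
daily_interactions = 20_000   # assumed volume for a hypothetical automated workflow
error_rate = 0.05             # the "even as low as 5%" case

bad_outcomes_per_day = daily_interactions * error_rate
print(f"{bad_outcomes_per_day:.0f} erroneous interactions per day")  # prints: 1000
```

At that assumed volume, a "small" 5% error rate means roughly a thousand factually wrong, ethically fraught or mishandled interactions every day – each one a potential operational or reputational liability.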

Hybrid AI: A Practical Antidote To The Hype

So, how do we sober up and pursue feasible deployments that realize the potential value of these technologies? The answer is hybrid AI.

Hybrid AI offers a practical alternative to pursuing the ever-elusive ideal AGI. GenAI hallucinates and exhibits other unacceptable behaviors that preclude its deployment – especially for its more ambitious intended uses, such as performing the role of customer service agent, analyst, educator or all-capable virtual assistant. Rather than supreme autonomy, a feasible route to leveraging genAI and pursuing its more ambitious uses is to hybridize it with predictive AI.

Here’s how hybrid AI works: Machine learning models serve as a vital reliability layer. By assigning probability-based risk scores to the outputs of generative models, predictive AI can systematically identify the specific cases with the highest likelihood of failure or behavioral error. These high-risk cases are then routed to human operators for review. This judicious inclusion of a “human-in-the-loop” mitigates the operational risks associated with large language models while successfully automating a significant portion of the workload.
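Here is a minimal sketch of that routing pattern in Python. The risk threshold, data structures and scores are illustrative assumptions for this article, not any vendor’s actual implementation; in practice, the risk score would come from a predictive model trained on historical failure cases.

```python
# A minimal sketch of the hybrid AI routing pattern described above.
# The threshold, data structures and scores are hypothetical illustrations.
from dataclasses import dataclass

RISK_THRESHOLD = 0.30  # assumed operating point; tuned per workflow in practice

@dataclass
class GenAIOutput:
    case_id: str
    draft_response: str
    risk_score: float  # probability of failure, assigned by a predictive model

def route(output: GenAIOutput) -> str:
    """Send high-risk generative outputs to a human; auto-send the rest."""
    if output.risk_score >= RISK_THRESHOLD:
        return "human_review"  # human-in-the-loop handles the risky tail
    return "auto_send"         # the bulk of the workload stays automated

# Example: three claims-processing outputs with model-assigned risk scores.
batch = [
    GenAIOutput("c1", "Claim approved per policy 4.2.", 0.04),
    GenAIOutput("c2", "Coverage denied; see appendix.", 0.62),
    GenAIOutput("c3", "Refund issued to member.", 0.11),
]
for output in batch:
    print(output.case_id, "->", route(output))  # only c2 goes to a human
```

The design choice here is the threshold: lowering it routes more cases to humans, trading automation rate for reliability, so each workflow picks its own operating point.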

An Industrywide Pivot To Hybrid AI

Hybrid AI is already moving from theory to practice. Industry leaders as diverse as Netflix, Amazon, JPMorgan and Microsoft are actively deploying these hybrid systems (they’ve lined up to speak on the topic at the conference I chair, HYBRID AI 2026). This approach empowers businesses to navigate the limitations of genAI and deploy reliable, valuable semi-autonomy.

The move to hybrid AI represents a sobering up, an evolution away from unrealistic elation about supreme autonomy. That intoxication hinges on a misbelief – a hope and a prayer – that LLMs will somehow evolve into “virtual humans.”

Instead, we need to take the pressure off these models – dispense with the unrealistic performance expectations – and judiciously leverage what they feasibly can do by pairing them with the predictive safeguards they desperately need in order to achieve launch-worthiness.

You can access an overview of HYBRID AI 2026 (May 5 – 6 in San Francisco) and a description of each enterprise presentation here. Disclosure: As the founding program chair, I am a partial owner of the Machine Learning Week conference series – which includes HYBRID AI 2026 – and I receive an honorarium for chairing.