{"id":11357,"date":"2026-04-22T00:33:36","date_gmt":"2026-04-22T00:33:36","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/11357\/"},"modified":"2026-04-22T00:33:36","modified_gmt":"2026-04-22T00:33:36","slug":"the-case-against-declaring-artificial-general-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/11357\/","title":{"rendered":"The Case Against Declaring Artificial General Intelligence"},"content":{"rendered":"<p>\u201cAGI is here, now.\u201d<\/p>\n<p>That\u2019s Sequoia Capital this week, one of Silicon Valley\u2019s most legendary venture firms and a major OpenAI investor, declaring we\u2019ve crossed the threshold into artificial general intelligence.<\/p>\n<p>Their <a href=\"https:\/\/sequoiacap.com\/article\/2026-this-is-agi\/\" rel=\"nofollow noopener\" target=\"_blank\">post also proclaims<\/a>, in big bold letters, that they are \u201cBlissfully Unencumbered by the Details\u201d. When Sequoia speaks, the tech world listens. This post has dominated conversations across the AI builder community for days.<\/p>\n<p>As a builder, venture investor, and AI scientist myself, I find their proclamation both deeply useful and profoundly dangerous.<\/p>\n<p>Here\u2019s what\u2019s useful<\/p>\n<p>Sequoia offers a functional definition of AGI: \u201cthe ability to figure things out. That\u2019s it\u201d. AI can now crawl through information, determine a path forward, and execute. The key shift, as they frame it, is AI moving from \u201ctalking\u201d to \u201cdoing\u201d. Harvey and Legora \u201cfunction as associates,\u201d Juicebox \u201cfunctions as a recruiter,\u201d OpenEvidence\u2019s Deep Consult \u201cfunctions as a specialist.\u201d Those are their exact phrases, and I\u2019m skeptical of the framing, but we\u2019ll get to that.<\/p>\n<p>They\u2019re throwing down the gauntlet for builders, and this matters. AI systems can actually redline contracts now, genuinely reach out to prospects today. 
It\u2019s a reminder to think bigger about what\u2019s possible and that the horizon has expanded dramatically in just the last year.<\/p>\n<p>I sent this post to my own co-founders, not to debate the philosophy, but to push us on Sequoia\u2019s \u201cdoing\u201d versus \u201ctalking\u201d framework. We need to rise to this challenge ourselves.<\/p>\n<p>However, calling these systems \u201cAGI\u201d is profoundly damaging, both to the credibility of the AI revolution and to its safe deployment. It obscures what AI agents can actually do today (hint: not general superintelligence) and offers zero guidance on how humans should interact with them (hint: don\u2019t trust them blindly).<\/p>\n<p>Let me give three examples that expose these systems\u2019 real limitations.<\/p>\n<p>AI systems cannot handle out-of-distribution situations<\/p>\n<p>I wrote about this <a href=\"https:\/\/www.cityam.com\/im-an-ai-expert-so-how-did-i-get-gaslit-by-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">in my last column<\/a>, but the Greenland crisis offers a live, unfolding example. I tested how GenAI tools, such as ChatGPT 5.2 with maximum \u201cthinking and research\u201d mode, would analyze the situation. Could these supposedly AGI systems help me understand a fast-moving geopolitical event?<\/p>\n<p>They couldn\u2019t even conceive of it happening.<\/p>\n<p>I shared a screenshot of the Wikipedia page documenting the crisis. Every model told me it was fabricated, \u201cbullshit\u201d, impossible. 
When I pushed further, citing real news sources, ChatGPT kept telling me to \u201crelax\u201d \u2013 insisting \u201cthis is not a real crisis\u201d. The models are so anchored in the traditional norms of the Western Alliance that they simply cannot output context that contradicts their training data, even when confronted with primary sources.<\/p>\n<p>AI \u201cthinking\u201d fails when the situation sits outside its training distribution: gaslighting users instead of escalating uncertainty, refusing humility, and continuing to \u201creason\u201d even when it\u2019s wrong. If policymakers or politicians are using these tools to understand Greenland right now, that\u2019s genuinely dangerous.<\/p>\n<p>AI systems reflect the beliefs of their builders<\/p>\n<p>A <a href=\"https:\/\/www.nature.com\/articles\/s44387-025-00048-0\" rel=\"nofollow noopener\" target=\"_blank\">Nature article published two weeks ago<\/a> showed exactly this. Researchers found that LLMs reflected the ideology of their creators. Mainland Chinese models were extremely favorable toward the PRC; Western models were deeply unfavorable.<\/p>\n<p>Even within Western LLMs, the bias is stark. Grok (Elon Musk\u2019s xAI model) showed negative bias toward the EU and multiculturalism, reflecting a right-wing agenda. Google\u2019s Gemini, perceived as more liberal, was more favorable toward both.<\/p>\n<p>The AI community now accepts this as fact: LLMs mirror the ideology of their labs. So how can we trust that an AI agent has a \u201cblank slate\u201d to \u201cfigure something out\u201d? Especially when it must trawl through vast, nuanced data?<\/p>\n<p>Claims of AGI assume neutrality, or at least they should. The evidence says otherwise.<\/p>\n<p>Deterministic versus non-deterministic systems<\/p>\n<p>GenAI is deeply non-deterministic. The same prompt can produce slightly different outputs or vastly different ones.<\/p>\n<p>Humans intuitively understand what should be fixed versus creative. 
My shirt size when ordering online? Deterministic. The pattern I choose today? Creative preference. Even the latest, most powerful models blur this line constantly. You\u2019ve seen it: GenAI treating deterministic facts as speculative, creative inputs.<\/p>\n<p>This reveals a critical gap in meta-cognition \u2013 awareness of the thinking process itself. Without distinguishing what\u2019s fixed from what\u2019s generative, AI cannot reliably \u201cfigure things out\u201d.<\/p>\n<p>So what do we do?<\/p>\n<p>We have tools at our disposal. First, choose narrower use cases where bias and out-of-distribution events are less likely. Second, ensure AI has full, personalised context anchored in reality \u2013 not agents running wild without grounding. <a href=\"https:\/\/www.cityam.com\/context-is-king-when-it-comes-to-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">As I\u2019ve written before<\/a>, context is king when it comes to AI agents. This also clarifies what should be deterministic versus generative. Third, deploy rules-based filters and observer agents that trigger human review when needed.<\/p>\n<p>Finally, accept a fundamental truth: LLMs will always reflect their training data and their creators\u2019 ideology. LLMs (and their creators) are political actors, whether they want to be or not. As such, AI should be controlled by individual human users, not be a system done to humans. Provenance matters \u2013 tracing every decision back to a human, no matter how many steps removed, is essential for governance and safety.<\/p>\n<p>At the end of the day, I don\u2019t care what we call it \u2013 just not AGI. What we have right now is generationally powerful: AI that can talk and act efficiently within narrow, well-defined frameworks. Guided by critical guardrails, deterministic filters, and human-in-the-loop processes, these systems will add trillions of dollars of value to our global economy.<\/p>\n<p>Call it Artificial Narrow Intelligence. 
That\u2019s the trillion-dollar opportunity available today.<\/p>\n<p>By CityAM<\/p>\n","protected":false},"excerpt":{"rendered":"\u201cAGI is here, now.\u201d That\u2019s Sequoia Capital this week, one of Silicon Valley\u2019s most legendary venture firms and&hellip;\n","protected":false},"author":2,"featured_media":11358,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,405,4989,3013,25,9208,1897,2225,9209,6020],"class_list":{"0":"post-11357","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-ai-agents","10":"tag-ai-safety","11":"tag-artificial-general-intelligence","12":"tag-artificial-intelligence","13":"tag-bias-in-ai","14":"tag-geopolitics","15":"tag-llms","16":"tag-narrow-intelligence","17":"tag-sequoia-capital"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/11357","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=11357"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/11357\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/11358"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=11357"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=11357"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays
.com\/ai\/wp-json\/wp\/v2\/tags?post=11357"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}