{"id":273159,"date":"2026-01-08T00:22:07","date_gmt":"2026-01-08T00:22:07","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/273159\/"},"modified":"2026-01-08T00:22:07","modified_gmt":"2026-01-08T00:22:07","slug":"will-2026-be-the-year-that-the-ai-industry-stops-crowing-about-agi","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/273159\/","title":{"rendered":"Will 2026 Be the Year That the AI Industry Stops Crowing About &#8216;AGI&#8217;?"},"content":{"rendered":"<p>Up until now, the stated end goal for companies developing artificial intelligence (AI) products has almost universally been to achieve artificial general intelligence (AGI)\u2014an ill-defined ambition best summarized as building a hypothetical AI capable of matching or surpassing the cognitive abilities of humans. But now that we\u2019ve basically bet the entire economy on hitting that benchmark and earmarked literally trillions of dollars for resource-sucking data centers with the express intent of providing the processing power needed to build the god-machine, the industry is suddenly, collectively backing off the promise.<\/p>\n<p>This (frankly, pretty predictable) turn started last year. Back in August, OpenAI CEO Sam Altman <a href=\"https:\/\/www.cnbc.com\/2025\/08\/11\/sam-altman-says-agi-is-a-pointless-term-experts-agree.html\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> AGI is \u201cnot a super useful term,\u201d which also seemed to play a role in his company pivoting to talk about <a href=\"https:\/\/gizmodo.com\/chatgpt-achieves-a-new-level-of-intelligence-not-using-the-em-dash-2000686253\" rel=\"nofollow noopener\" target=\"_blank\">developing an AI capable of autonomous research<\/a> rather than mentioning AGI. 
That was noteworthy given that OpenAI was technically the only company with a formal definition of AGI: an AI system that can generate at least $100 billion in profits, <a href=\"https:\/\/gizmodo.com\/leaked-documents-show-openai-has-a-very-clear-definition-of-agi-2000543339\" rel=\"nofollow noopener\" target=\"_blank\">per leaked internal documents<\/a>.<\/p>\n<p>Others around the industry have also started to throw cold water on the AGI concept. Salesforce CEO Marc Benioff, a guy so obsessed with AI that he\u2019s considered <a href=\"https:\/\/gizmodo.com\/salesforce-ceo-mulls-changing-name-to-ai-flavored-agentforce-2000696360\" rel=\"nofollow noopener\" target=\"_blank\">changing the name of his company<\/a> to reflect his undying affinity for the technology, <a href=\"https:\/\/gizmodo.com\/marc-benioff-cant-get-enough-of-the-ai-hype-unless-you-say-agi-2000649999\" rel=\"nofollow noopener\" target=\"_blank\">described AGI as marketing \u201chypnosis\u201d<\/a> and said he\u2019s \u201cextremely suspect\u201d of anyone who hypes it. Dario Amodei, CEO of Anthropic, said he\u2019s \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=7LNyUbii0zw\" rel=\"nofollow noopener\" target=\"_blank\">always disliked<\/a>\u201d the term AGI. Just recently, Anthropic President Daniela Amodei <a href=\"https:\/\/www.businessinsider.com\/anthropic-president-idea-of-agi-may-already-be-outdated-2026-1\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> AGI is an \u201coutdated\u201d term. 
Microsoft CEO Satya Nadella has gone so far as to <a href=\"https:\/\/www.microsoft.com\/en-us\/investor\/events\/fy-2026\/earnings-fy-2026-q1\" rel=\"nofollow noopener\" target=\"_blank\">say<\/a> he doesn\u2019t think \u201cAGI as defined, at least by us in our contract, is ever going to be achieved anytime soon,\u201d and said any self-declared AGI achievement is just \u201cbenchmark hacking,\u201d which is funny considering Microsoft was <a href=\"https:\/\/techcrunch.com\/2024\/12\/26\/microsoft-and-openai-have-a-financial-definition-of-agi-report\/\" rel=\"nofollow noopener\" target=\"_blank\">integral to crafting OpenAI\u2019s money-generating definition<\/a> of AGI.<\/p>\n<p>Some in the industry are framing this shift away from AGI as researchers simply having even loftier goals in mind\u2014as if AGI is too limiting to describe what AI at its maximum capacity is capable of. But there\u2019s a simpler explanation for the change in language used to describe the AI end goal: large language models, the technology into which most major AI companies have poured endless amounts of money and data in pursuit of some form of general intelligence, simply are not capable of reaching that benchmark.<\/p>\n<p>That\u2019s the conclusion critics of the AI industry reached some time ago. 
People like Gary Marcus, a noted AI skeptic, have <a href=\"https:\/\/garymarcus.substack.com\/p\/breaking-openais-efforts-at-pure\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a>, \u201cpure scaling will not get us to AGI.\u201d Similar conclusions appear in recent research, including a <a href=\"https:\/\/machinelearning.apple.com\/research\/illusion-of-thinking\" rel=\"nofollow noopener\" target=\"_blank\">paper from Apple<\/a> that concludes LLMs are likely not capable of achieving AGI and a study from the Data Mining and Machine Learning Lab that <a href=\"https:\/\/arxiv.org\/pdf\/2508.01191\" rel=\"nofollow noopener\" target=\"_blank\">finds<\/a> \u201cchain of thought reasoning\u201d in LLMs is \u201ca mirage.\u201d That suggests AGI isn\u2019t just a bad metric because it\u2019s hard to define; it\u2019s likely not one that\u2019s achievable with these dumb bots.<\/p>\n","protected":false},"excerpt":{"rendered":"Up until now, the stated end goal for companies developing artificial intelligence (AI) products has almost universally 
been&hellip;\n","protected":false},"author":2,"featured_media":273160,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[1645,291,6006,17211,289,290,18,19,17,307,82],"class_list":{"0":"post-273159","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-agi","9":"tag-ai","10":"tag-anthropic","11":"tag-artificial-general-intelligence","12":"tag-artificial-intelligence","13":"tag-artificialintelligence","14":"tag-eire","15":"tag-ie","16":"tag-ireland","17":"tag-openai","18":"tag-technology"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@ie\/115856627644923248","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/273159","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=273159"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/273159\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/273160"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=273159"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=273159"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=273159"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}