{"id":12545,"date":"2026-04-22T15:19:51","date_gmt":"2026-04-22T15:19:51","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/12545\/"},"modified":"2026-04-22T15:19:51","modified_gmt":"2026-04-22T15:19:51","slug":"deliberating-on-the-many-definitions-of-artificial-general-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/12545\/","title":{"rendered":"Deliberating On The Many Definitions Of Artificial General Intelligence"},"content":{"rendered":"<p><img decoding=\"async\" class=\" top-image\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/1776871191_721_0x0.jpg\" alt=\"Computer programmer explaining adhesive note while giving presentation to colleagues in office\" data-height=\"1357\" data-width=\"2034\" fetchpriority=\"high\" style=\"position:absolute;top:0\"\/><\/p>\n<p>Artificial general intelligence (AGI) does not yet have a universally accepted definition, but we need one ASAP.<\/p>\n<p>getty<\/p>\n<p>In today\u2019s column, I examine an unresolved controversy in the AI field that hasn\u2019t received the attention it rightfully deserves, namely, what constitutes a sensible and universally agreed-upon definition for pinnacle AI, commonly and vaguely referred to as artificial general intelligence (AGI).<\/p>\n<p>This is a vital matter. At some point, we should be ready to agree whether the advent of AGI has been reached. There is also the matter of gauging AI progress and whether we are getting closer to AGI or veering away from AGI. All told, if there isn\u2019t a wholly accepted universal definition, we will be constantly battling over whether pinnacle AI is in our sights and whether it has truly been attained. This is the classic dilemma of apples versus oranges. A person who defines apples as though they are oranges will be forever in a combative mode when trying to discuss whether someone is holding an apple in their hands. 
<\/p>\n<p>As Socrates once pointed out, the beginning of wisdom is the definition of terms. There needs to be a concerted effort to properly define what AGI means.<\/p>\n<p>Let\u2019s talk about it.<\/p>\n<p>This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/\" data-ga-track=\"InternalLink:https:\/\/www.forbes.com\/sites\/lanceeliot\/\" target=\"_self\" aria-label=\"the link here\" rel=\"nofollow noopener\">the link here<\/a>). <\/p>\n<p>Heading Toward AGI And ASI<\/p>\n<p>First, some fundamentals are required to set the stage for this discussion.<\/p>\n<p>There is a great deal of research going on to further advance AI. The general goal is either to reach artificial general intelligence (AGI) or perhaps even the farther-reaching possibility of achieving artificial superintelligence (ASI). <\/p>\n<p>Overall, AGI is generally conceived of as AI that is on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2024\/12\/10\/sneaky-shiftiness-on-the-boundaries-between-ai-versus-agi-and-ultimately-ai-superintelligence\/\" data-ga-track=\"InternalLink:https:\/\/www.forbes.com\/sites\/lanceeliot\/2024\/12\/10\/sneaky-shiftiness-on-the-boundaries-between-ai-versus-agi-and-ultimately-ai-superintelligence\/\" target=\"_self\" aria-label=\"the link here\" rel=\"nofollow noopener\">the link here<\/a>.<\/p>\n<p>We have not yet attained the generally envisioned AGI. 
<\/p>\n<p>In fact, it is unknown whether we will ever reach AGI; it might be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. <\/p>\n<p>Controversy About AGI As Terminology<\/p>\n<p>To the surprise of many in the media and the general public at large, there is no universally accepted standardized definition for what AGI consists of. <\/p>\n<p>This lack of an across-the-board formalized definition for AGI spurs numerous difficulties and problems. For example, AI gurus referring to AGI can be making unspoken assumptions about what they believe AGI to be, and thereby stoke confusion since they aren\u2019t all referring to the same thing. Discussions can occur at cross purposes due to each respective expert having their own idiosyncratic definition of what AGI is or ought to be.<\/p>\n<p>An especially disquieting concern is that attaining AGI has become a preeminent directional focus for many in the AI industry, yet this is a bit of a mirage since the AI field does not have a single, agreed-upon North Star that represents what AGI is supposed to be:<\/p>\n<p>\u201cRecent advances in large language models (LLMs) have sparked interest in \u2018achieving human-level \u2018intelligence\u2019 as a \u2018north-star goal\u2019 of the AI field. This goal is often referred to as \u2018artificial general intelligence\u2019 (\u2018AGI\u2019).\u201d \u201cYet rather than helping the field converge around shared goals, AGI discourse has mired it in controversies.\u201d \u201cResearchers diverge on what AGI is and on assumptions about goals and risks. 
Researchers further contest the motivations, incentives, values, and scientific standing of claims about AGI.\u201d \u201cFinally, the building blocks of AGI as a concept &#8212; intelligence and generality &#8212; are contested in their own right.\u201d (source: Borhane et al, \u201cStop Treating \u2018AGI\u2019 as the North-Star Goal of AI Research.\u201d arXiv, February 7, 2025).<\/p>\n<p>The Moving Of The Cheese<\/p>\n<p>In a prior posting, I had noted that some AI luminaries have been opting to define AGI in a manner that suits their specific interests. I refer to this as moving the cheese (see my discussion at <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2025\/02\/11\/sam-altman-moves-the-cheese-when-it-comes-to-attaining-agi\/\" data-ga-track=\"InternalLink:https:\/\/www.forbes.com\/sites\/lanceeliot\/2025\/02\/11\/sam-altman-moves-the-cheese-when-it-comes-to-attaining-agi\/\" target=\"_self\" aria-label=\"the link here\" rel=\"nofollow noopener\">the link here<\/a>). You might be familiar with the movable cheese metaphor &#8212; it became part of our cultural lexicon due to a book published in 1998 entitled \u201cWho Moved My Cheese? An Amazing Way To Deal With Change In Your Work And In Your Life\u201d. The book observed that we are all, at times, akin to mice seeking a morsel of cheese in a maze.<\/p>\n<p>OpenAI CEO Sam Altman is especially adept at loosely defining and then redefining AGI. In his personal blog posting entitled \u201cThree Observations\u201d of February 10, 2025, he provided a definition of AGI that said this: \u201cAGI is a weakly defined term, but generally speaking, we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.\u201d<\/p>\n<p>This AGI definition contains a plethora of ambiguity and came under fierce criticism for seemingly being shaped to accommodate OpenAI\u2019s AI products. 
For example, by indicating that AGI would be at a human level in \u201cmany fields\u201d, the definition seemed to be an immense watering down of the earlier concept that AGI would be versed in all fields. It is a lot easier to devise pinnacle AI that is merely accomplished in many fields, versus having to reach the much higher threshold of doing so in all fields.<\/p>\n<p>Still Messing Around<\/p>\n<p>In a recently reported interview, Sam Altman made these latest remarks about the AGI moniker:<\/p>\n<p>\u201cI think it\u2019s not a super useful term.\u201d \u201cI think the point of all of this is it doesn\u2019t really matter, and it\u2019s just this continuing exponential model capability that we\u2019ll rely on for more and more things.\u201d (source: \u201cSam Altman now says AGI is \u2018not a super useful term\u2019 \u2013 and he\u2019s not alone\u201d by Ryan Browne, CNBC, August 11, 2025).<\/p>\n<p>Once again, this type of chatter about the meaning of AGI has sparked renewed controversy. The remarks seem intended to create distance from the AGI definitions that he and others have touted in the last several years.<\/p>\n<p>Why so?<\/p>\n<p>Part of the underlying basis for wanting to distance the AGI phraseology could be laid at the feet of the newly released GPT-5. Leading up to GPT-5, there had been a tremendous buildup of expectations that we were finally going to have AGI in our hands, ready for immediate use. 
By and large, though GPT-5 had some interesting advances, it wasn\u2019t even close to any kind of AGI, no matter how low a bar one might set for AGI; see my detailed analysis at <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2025\/08\/07\/gpt-5-is-launched-but-set-aside-your-expectations-that-this-was-going-to-be-either-agi-or-artificial-superintelligence-since-it-clearly-isnt\/\" data-ga-track=\"InternalLink:https:\/\/www.forbes.com\/sites\/lanceeliot\/2025\/08\/07\/gpt-5-is-launched-but-set-aside-your-expectations-that-this-was-going-to-be-either-agi-or-artificial-superintelligence-since-it-clearly-isnt\/\" target=\"_self\" aria-label=\"the link here\" rel=\"nofollow noopener\">the link here<\/a>.<\/p>\n<p>Inspecting AGI Definitions<\/p>\n<p>Let\u2019s go ahead and look at a variety of AGI definitions that have been floating around and are considered potentially viable or at least noteworthy ways to define AGI. I list these AGI definitions here so that you can see them collected in one convenient place. Having them front and center also makes for handy analysis and comparison. <\/p>\n<p>Before launching into the AGI definitions, you might find it of keen interest that the AI field readily acknowledges that things are in a state of flux on this heady matter. The Association for the Advancement of Artificial Intelligence (AAAI), considered a top-caliber AI non-profit academic professional association, recently convened a special panel to envision the future of AI, and they, too, acknowledged the confounding nature of what AGI might be. <\/p>\n<p>The AAAI futures report that was published in March 2025 made this pointed commentary about AGI (excerpts):<\/p>\n<p>\u201cAGI is not a formally defined concept, nor is there any agreed test for its achievement.\u201d 
\u201cSome researchers suggest that \u2018we\u2019ll know it when we see it\u2019 or that it will emerge naturally from the right set of principles and mechanisms for AI system design.\u201d \u201cIn discussions, AGI may be referred to as reaching a particular threshold on capabilities and generality. However, others argue that this is ill-defined and that intelligence is better characterized as existing within a continuous, multidimensional space.\u201d<\/p>\n<p>Strawman Definitions Of AGI<\/p>\n<p>Let\u2019s begin the rundown of AGI definitions with this strawman:<\/p>\n<p>\u201cAGI is a computer that is capable of solving human solvable problems, but not necessarily in human-like ways.\u201d (source: Morris et al, \u201cLevels of AGI: Operationalizing Progress on the Path to AGI.\u201d arXiv, November 4, 2023).<\/p>\n<p>Give the definition a contemplative moment. <\/p>\n<p>Here\u2019s one mindful facet. Is this AGI definition suggesting that problems unsolvable by humans are completely beyond the capability of AGI? If so, this would greatly dismay many, since a vaunted basis for pursuing AGI is that its advent will presumably lead to cures for cancer and many other diseases (problems that humans have so far been unable to solve). 
<\/p>\n<p>I trust you can see the challenges associated with devising a universally acceptable, ironclad AGI definition.<\/p>\n<p>In a now classic research paper on the so-called sparks of AGI, the authors provided this definition of AGI:<\/p>\n<p>\u201cWe use AGI to refer to systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.\u201d (source: Bubeck et al, \u201cSparks of Artificial General Intelligence: Early Experiments with GPT-4.\u201d arXiv, March 22, 2023).<\/p>\n<p>This research paper became a widespread flashpoint both within and beyond the AI community for claiming that the present-day AI of 2023 was showcasing a hint or semblance of AGI. The researchers invoked the parlance that AI at the time was revealing sparks of AGI. <\/p>\n<p>Critics and skeptics alike pointed out that the AGI definition was of such a broad and non-specific nature that nearly any AI system could be construed as being ostensibly AGI.<\/p>\n<p>More Definitions Of AGI<\/p>\n<p>In addition to AI researchers defining AGI, many others have done so, too. <\/p>\n<p>The Gartner Group, a longstanding practitioner-oriented think tank on computing in general, provided this definition of AGI in 2024:<\/p>\n<p>\u201cArtificial General Intelligence (AGI), also known as strong AI, is the (currently hypothetical) intelligence of a machine that can accomplish any intellectual task that a human can perform. AGI is a trait attributed to future autonomous AI systems that can achieve goals in a wide range of real or virtual environments at least as effectively as humans can\u201d (Gartner Group as quoted in Jaffri, A. 
\u201cExplore Beyond GenAI on the 2024 Hype Cycle for Artificial Intelligence.\u201d Gartner Group, November 11, 2024).<\/p>\n<p>This definition illustrates that some AGI definitions are short while others are lengthier; this example is a bit longer than the two AGI definitions noted earlier. Some in the AI community espouse the belief that a suitably complete AGI definition would have to be quite lengthy in order to encompass the essence of what AGI is and what AGI is not. <\/p>\n<p>Another noteworthy aspect of the Gartner Group definition of AGI is that the phrase \u201cstrong AI\u201d is mentioned in the definition. The initial impetus for the AGI moniker arose partially due to debates within the AI community about strong AI versus weak AI (see my explanation at <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2025\/09\/05\/forcing-ai-to-shut-down-conversations-when-people-might-be-veering-into-ai-psychosis\/\" data-ga-track=\"InternalLink:https:\/\/www.forbes.com\/sites\/lanceeliot\/2025\/09\/05\/forcing-ai-to-shut-down-conversations-when-people-might-be-veering-into-ai-psychosis\/\" target=\"_self\" aria-label=\"the link here\" rel=\"nofollow noopener\">the link here<\/a>).<\/p>\n<p>Here is another example of a multi-sentence AGI definition:<\/p>\n<p>\u201cAn Artificial General Intelligence (AGI) system is a computer that is adaptive to the open environment with limited computational resources and that satisfies certain principles. For AGI, problems are not predetermined and not specified ones; otherwise, there is most probably always a special system that performs better than any general system. I keep the part \u2018certain principles\u2019 to be blurry, waiting for future discussions and debates on it.\u201d (source: Xu, \u201cWhat is Meant by AGI? 
On The Definition of Artificial General Intelligence.\u201d arXiv, April 16, 2024).<\/p>\n<p>This definition reveals another facet of AGI definitions overall, namely the importance of defining all terms used within the definition itself. In this instance, the researcher states that AGI must satisfy \u201ccertain principles\u201d, yet openly acknowledges that those informally noted \u201ccertain principles\u201d remain undefined. Such incompleteness leaves any postulated AGI definition open to wide interpretation.<\/p>\n<p>Lots And Lots Of AGI Definitions<\/p>\n<p>Wikipedia has a definition for AGI:<\/p>\n<p>\u201cArtificial general intelligence (AGI) &#8212; sometimes called human\u2011level intelligence AI &#8212; is a type of artificial intelligence capable of performing the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans\u201d (Wikipedia 2025).<\/p>\n<p>A notable element of this AGI definition and many others is whether AGI is intended to be on par with humans or exceed humans (\u201ccomparable to, or surpassing, that of humans\u201d). <\/p>\n<p>There is an ongoing debate in the AI community on this nuanced but crucial consideration. One viewpoint is that the coined term artificial superintelligence (ASI) encompasses AI that is beyond or above human capabilities, while AGI is solely intended to be AI that meets or is on par with human capabilities.<\/p>\n<p>IBM has provided a definition of AGI:<\/p>\n<p>\u201cArtificial general intelligence (AGI) is a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. 
It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software\u201d (IBM as quoted in Bergmann et al, \u201cWhat is artificial general intelligence (AGI)?\u201d IBM, September 17, 2024).<\/p>\n<p>An element of special interest in this AGI definition is the reference to machine learning (ML). Some AGI definitions refer to subdisciplines within the AI field, such as ML, robotics, or autonomous systems. <\/p>\n<p>Should an AGI definition explicitly or firmly refer to AI practices or subdisciplines? <\/p>\n<p>The question is often asked since AGI then seemingly becomes tied to specific AI fields of study. The contention is that the definition of AGI should be fully standalone and not rely upon references to AI fields or subfields (which are subject to change and are seemingly unnecessary for strictly defining AGI per se).<\/p>\n<p>OpenAI has also posted a definition of AGI, as contained within the official OpenAI Charter statement:<\/p>\n<p>\u201cAGI is defined as highly autonomous systems that outperform humans at most economically valuable work.\u201d<\/p>\n<p>This definition highlights an emerging trend: the wording \u201cat most economically valuable work\u201d, or a similar variation of it, is increasingly being used in the latest definitions of AGI. This appears to tie the capabilities of AGI to the notion of economically valuable work. 
<\/p>\n<p>Critics argue that this is a limiting factor that does not suitably belong in the definition of AGI and perhaps serves a particular agenda rather than fully and openly defining AGI.<\/p>\n<p>My Working Definition Of AGI<\/p>\n<p>The working definition of AGI that I have been using is this strawman that I composed when the AGI moniker was initially coming into vogue as a catchphrase:<\/p>\n<p>\u201cAGI is defined as an AI system that exhibits intelligent behavior of both a narrow and general manner on par with that of humans, in all respects\u201d (source: Eliot, \u201cFiguring out what artificial general intelligence consists of\u201d, Forbes, December 6, 2023).<\/p>\n<p>The reference to intelligent behavior in both a narrow and general manner is an acknowledgment that historically, AGI as a phrase partially arose to supersede the generation of AI that was viewed as being overly narrow and not of a general nature (such as expert systems, knowledge-based systems, and rules-based systems). <\/p>\n<p>Another element is that AGI would be on par with the intelligent behavior of humans in all respects. Thus, AGI would not be superhuman; instead, it would be on the same intellectual level as humankind, and would be so in all respects, comprehensively and exhaustively.<\/p>\n<p>Mindfully Asking What AGI Means<\/p>\n<p>When you see a banner headline proclaiming that AGI is here, or getting near, or maybe eons away, I hope that the first thought you have is to dig into the meaning of AGI as it is being employed in that media proclamation. <\/p>\n<p>Perhaps the declaration refers to apples rather than oranges or has a definition that is sneakily devised to tilt toward one vantage point over another. AGI has regrettably become a catchall. Some believe we should discard the AGI moniker and come up with a new name for pinnacle AI. 
Others assert that this might merely be a form of trickery to avoid owning up to the harsh fact that we have not yet attained AGI.<\/p>\n<p>For the time being, I would wager that the AGI moniker is going to stick around. It has gotten enough traction that even though it is loosey-goosey, it does have a certain amount of popularized name recognition. If AGI as a designation is going to have long legs, it would be important to reach a thoughtful agreement on a universally accepted definition.<\/p>\n<p>The famous English novelist Samuel Butler made this pointed remark: \u201cA definition is the enclosing of a wilderness of ideas within a wall of words.\u201d Do your part to help enclose a wilderness of ideas about pinnacle AI within a neatly packed and fully sensible set of words. <\/p>\n<p>Fame and possibly fortune await.<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial general intelligence (AGI) does not yet have a universally accepted definition, but we need one ASAP. getty&hellip;\n","protected":false},"author":2,"featured_media":12546,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,9998,3013,8968,111,9334,9996,9999,9299,9997],"class_list":{"0":"post-12545","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-anthropic-claude-google-gemini-meta-llama-xai-grok","10":"tag-artificial-general-intelligence","11":"tag-artificial-general-intelligence-agi","12":"tag-artificial-intelligence-ai","13":"tag-artificial-superintelligence-asi","14":"tag-definitions-meaning-terminology-moniker-naming-standard","15":"tag-future-progress-predictions","16":"tag-generative-ai-large-language-model-llm","17":"tag-openai-chatgpt-gpt-5-gpt-4o-sam-altman"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/12545","targetHints":{"allow":["GET"]}}],"collection":[{"href":"htt
ps:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=12545"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/12545\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/12546"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=12545"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=12545"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=12545"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}