{"id":10917,"date":"2026-04-21T19:20:18","date_gmt":"2026-04-21T19:20:18","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/10917\/"},"modified":"2026-04-21T19:20:18","modified_gmt":"2026-04-21T19:20:18","slug":"what-if-general-artificial-intelligence-was-already-here-without-us-really-understanding-it","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/10917\/","title":{"rendered":"What if general artificial intelligence was already here\u2026 without us really understanding it?"},"content":{"rendered":"<p><a href=\"https:\/\/plato.stanford.edu\/entries\/artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">Artificial general intelligence<\/a>, or AGI, refers to an AI capable of matching human intelligence. For many researchers, it is the ultimate goal. Major AI companies such as <a href=\"https:\/\/www.futura-sciences.com\/en\/code-red-at-openai-sam-altman-panics-as-googles-gemini-3-dominates-and-meta-scoops-experts_22807\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> have made it clear that reaching this level is their mission, with some claiming we could get there within a decade \u2013 or even sooner. While most experts are betting on large language models, others argue that this architecture is too limited. Yann LeCun, for instance, left Meta to focus on so-called \u2018world models\u2019, which he considers more promising.<\/p>\n<p>But what if everyone is wrong?<\/p>\n<p>In their Nature essay, philosopher Eddy Keming Chen and his colleagues at the University of California \u2013 specialists in linguistics, computer science, and data science \u2013 argue that AGI may already be here. Today\u2019s chatbots can pass the Turing test. Designed in 1950 by mathematician Alan Turing, the test places a human in written conversation with both another human and a machine, asking them to determine which is which. ChatGPT is now judged to be human more often than actual humans are.
A few years ago, that achievement alone might have been enough to label a system as AGI.<\/p>\n<p>Redefining intelligence: AGI vs superintelligence<\/p>\n<p>One of the central issues is that there is no universally accepted definition of AGI. The authors suggest that artificial intelligence should be understood as part of a broader category that includes humans. If that is the case, then intelligence does not require perfection or total mastery. No human can perform every task at expert level across all domains.<\/p>\n<p>They distinguish between AGI and superintelligence. In their view, AGI describes systems such as large language models that already demonstrate expert-level performance in many fields. Superintelligence, by contrast, would vastly outperform humans in every domain.<\/p>\n<p>The researchers also address ten common objections. One of the most popular criticisms is that these systems are merely \u2018stochastic parrots\u2019, repeating patterns from their training data. Yet their ability to solve new mathematical problems and transfer skills across domains challenges that claim. Expecting revolutionary scientific breakthroughs, they argue, may be an unreasonable standard.<\/p>\n<p>Another objection is that language models lack a physical representation of the world. However, their capacity to predict the consequences of actions or to reason through physics problems suggests otherwise. AI systems are no longer limited to text \u2013 they now process images, audio, and video. While they do not have bodies, the authors contend that intelligence does not require one. With rapid advances in robotics, the era of \u2018physical AI\u2019 has already begun.<\/p>\n<p>They also reject the idea that autonomy or a persistent autobiographical memory is essential to general intelligence. It is true that AI systems require vast amounts of data and cannot, for instance, learn to drive with only twenty hours behind the wheel.
But, they argue, what matters is the final performance \u2013 not how long it takes to get there.<\/p>\n<p>And what about hallucinations?<\/p>\n<p>The authors address AI hallucinations, though only briefly. According to them, hallucination rates have declined in the latest model generations. Humans, after all, are also prone to cognitive biases and false memories. Still, hallucinations remain common, and some studies suggest they may even be increasing. OpenAI itself has acknowledged that even with GPT-5, roughly one in ten responses may contain a hallucination.<\/p>\n<p>In the end, the authors maintain that we have already achieved AGI. If we fail to recognise it, they argue, it may be because our understanding of intelligence is too narrow. Our anthropocentric bias prevents us from seeing the emergence of a new form of intelligence \u2013 one that is profoundly different from our own.<\/p>\n<p>That perspective may also explain why figures such as <a href=\"https:\/\/www.futura-sciences.com\/en\/the-superintelligence-unveiled-by-mark-zuckerberg-has-deeply-unsettled-experts_19338\/\" rel=\"nofollow noopener\" target=\"_blank\">Mark Zuckerberg<\/a> increasingly prefer to speak of superintelligence instead. Perhaps the debate is no longer about whether AGI has arrived, but about what we choose to call the intelligence now unfolding before us.<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial general intelligence, or AGI, refers to an AI capable of matching human intelligence.
For many researchers, it&hellip;\n","protected":false},"author":2,"featured_media":10918,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,2720,3013,1642],"class_list":{"0":"post-10917","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-ai-hallucinations","10":"tag-artificial-general-intelligence","11":"tag-large-language-models"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/10917","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=10917"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/10917\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/10918"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=10917"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=10917"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=10917"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}