{"id":8788,"date":"2026-04-20T22:30:29","date_gmt":"2026-04-20T22:30:29","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/8788\/"},"modified":"2026-04-20T22:30:29","modified_gmt":"2026-04-20T22:30:29","slug":"openai-says-70-to-agi-a-prominent-cognitive-scientist-says-were-nowhere-close","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/8788\/","title":{"rendered":"OpenAI says 70% to AGI. A prominent cognitive scientist says we&#8217;re nowhere close\u00a0"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-81386\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/jagged-300x195.png\" alt=\"\" width=\"500\" height=\"325\"  \/>Depending on who you ask in Silicon Valley or beyond, AGI, artificial general intelligence, is either a finished product, a work in progress, or a total misunderstanding of what intelligence actually is.<\/p>\n<p>\u201cI\u2019d say I\u2019m basically 70 to 80% there,\u201d OpenAI co-founder and president Greg Brockman said this week, describing how close he believes we are to AGI, AI that can match or exceed human-level performance across virtually any intellectual task. \u201cI think it\u2019s extremely clear that we are going to have AGI within the next couple years in a way that is still going to be jagged, but that the floor of task will just be almost for any intellectual task of how you use your computer, the AI will be able to do that.\u201d<\/p>\n<p>His remarks echo a line from OpenAI co-founder <a href=\"https:\/\/x.com\/karpathy\/status\/1816531576228053133\" rel=\"nofollow\">Andrej Karpathy<\/a>, who noted that state-of-the-art LLMs can both \u201cperform extremely impressive tasks (e.g. 
solve complex math problems)\u201d while simultaneously struggling with \u201csome very dumb problems.\u201d<\/p>\n<p>Meanwhile, <a href=\"https:\/\/www.youtube.com\/watch?v=awE5DV-M48o\" rel=\"nofollow noopener\" target=\"_blank\">NVIDIA CEO Jensen Huang recently went further, telling Lex Fridman that AGI has already been achieved<\/a>, though his working definition was modest: an AI agent that creates a briefly viral app worth a billion dollars before it \u201ckind of dies away.\u201d Building an actual Nvidia? \u201cThe odds of 100,000 of those agents building Nvidia is 0%,\u201d Huang conceded.<\/p>\n<p>But Gary Marcus, cognitive scientist, NYU professor emeritus, and one of the AI field\u2019s most devoted curmudgeons, who <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gary_Marcus\" rel=\"nofollow noopener\" target=\"_blank\">also built and sold an AI company to Uber<\/a>, says the entire framing is wrong.<\/p>\n<p>\u201cWe as a society are placing truly massive bets around the premise that AGI is close,\u201d Marcus said in an October 2025 keynote at the Royal Society in London, <a href=\"https:\/\/www.youtube.com\/watch?v=s-qKiBjabY0&amp;t=142s\" rel=\"nofollow noopener\" target=\"_blank\">posted online in March 2026<\/a>. \u201cI am talking about literally a trillion dollar bet.\u201d The bet, he argues, rests on a confusion: \u201cLarge language models are deeply flawed imitators that are preying on the Eliza effect,\u201d a psychological phenomenon in which users treat programs as if they have human-like capabilities.<\/p>\n<p>\u2018Spud\u2019 and the scaling argument<\/p>\n<p>OpenAI has reportedly just completed a new pre-training run codenamed Spud, and Brockman sees it as the distillation of years of research. 
\u201cWe have maybe two years\u2019 worth of research that is coming to fruition in this model,\u201d Brockman told <a href=\"https:\/\/www.youtube.com\/watch?v=J6vYvk7R190&amp;t=2158s\" rel=\"nofollow noopener\" target=\"_blank\">Big Technology\u2019s Alex Kantrowitz<\/a>. \u201cIt\u2019s going to be very exciting.\u201d<\/p>\n<p>He said OpenAI had achieved something of a domino effect with its latest AI model, where a degree of recursion simplifies model development: \u201cWhen we improve the pre-training, it makes all the other steps much easier. It\u2019s a model that is more capable to start. When it\u2019s trying out different ideas and learning from its own mistakes, that process just is faster. It needs to make fewer mistakes.\u201d<\/p>\n<p>The models, he said, are already transforming real work. \u201cNew model releases really went from the AI being able to do like 20% of your tasks to like 80%. And that was this massive shift because it went from being kind of a nice thing to do, to you absolutely need to retool your workflow around these AIs.\u201d<\/p>\n<p>\u201cWe had this result recently where a physicist had been working on a problem for some time. He gave it to our model. 12 hours later we have a solution,\u201d Brockman said. \u201cAnd he said this is the first time he\u2019d seen a model where he felt like it was thinking; that this is a problem that maybe humanity would never solve and our AI solved it.\u201d<\/p>\n<p>A contrarian take<\/p>\n<p>Marcus has heard this pitch before. For more than two decades, in fact. \u201cWhat I\u2019ve heard for a quarter century is \u2018we\u2019re working on it, we\u2019re going to solve it next year with a little bit more data,&#8217;\u201d Marcus said. 
\u201cI think by now maybe we can see that that\u2019s not really the case.\u201d His argument is that scaling, the principle that more data and more compute reliably produce smarter models, is an empirical trend that has already begun to fade.<\/p>\n<p>\u201cSam Altman talks about scaling laws like they\u2019re a property of the universe. They aren\u2019t. They\u2019re empirical observations, like Moore\u2019s Law. And Moore\u2019s Law ran out,\u201d Marcus said. He pointed to <a href=\"https:\/\/ai.meta.com\/blog\/llama-4-multimodal-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">Meta\u2019s Llama 4<\/a>, which \u201cjust wasn\u2019t that good\u201d and \u201cfell off the curve.\u201d In any case, the model was <a href=\"https:\/\/www.zdnet.com\/article\/metas-llama-4-herd-controversy-and-ai-contamination-explained\/\" rel=\"nofollow noopener\" target=\"_blank\">controversial<\/a>. And then GPT-5: \u201cAltman spent three years campaigning for this\u2026 when it came out, it was just okay.\u201d<\/p>\n<p>Here, Marcus is being charitable. At launch, GPT-5 was met with <a href=\"https:\/\/www.platformer.news\/gpt-5-backlash-openai-lessons\/\" rel=\"nofollow noopener\" target=\"_blank\">considerable backlash from some quarters<\/a>. Marcus predicted GPT-5 would be underwhelming, and for years had signaled that the scaling hypothesis was running out of steam.<\/p>\n<p>\u201cI argued in 2022 in a paper that basically got me excommunicated from the AI field, I\u2019m not even kidding, that scaling was not a law of the universe and that it was not going to solve the core problems,\u201d Marcus said, referring to his <a href=\"https:\/\/nautil.us\/deep-learning-is-hitting-a-wall-238440\" rel=\"nofollow noopener\" target=\"_blank\">Nautilus publication<\/a>.<\/p>\n<p>Then, in a development Marcus relishes, one of the field\u2019s most respected voices came around. 
<a href=\"https:\/\/en.wikipedia.org\/wiki\/Richard_S._Sutton\" rel=\"nofollow noopener\" target=\"_blank\">Richard Sutton<\/a>, the reinforcement learning pioneer whose influential essay \u201c<a href=\"http:\/\/www.incompleteideas.net\/IncIdeas\/BitterLesson.html\" rel=\"nofollow noopener\" target=\"_blank\">The Bitter Lesson<\/a>\u201d argued that brute-force compute always wins, publicly reversed course on LLMs.<\/p>\n<p>\u201cYou were never alone, Gary, though you were the first to bite the bullet, to fight the good fight and make the argument well, again and again, for the limitations of LLMs,\u201d Sutton wrote. \u201cI salute you for this good service.\u201d<\/p>\n<p>\u201cThis is the man who invented the bitter lesson,\u201d Marcus told the audience, \u201cand he came to see that LLMs were not it.\u201d<\/p>\n<p>The jaggedness question<\/p>\n<p>Brockman acknowledges that current models are uneven.\u00a0\u201cThe technology we have right now is very jagged. It is absolutely superhuman at many tasks. When it comes to writing code, those kinds of things, the AI can just do it. But there\u2019s some very basic tasks that a human can do that our AI still struggles with.\u201d<\/p>\n<p>Brockman described an engineer on his team who went from being unable to get the AI to handle \u201clow-level hardcore systems engineering\u201d to giving it a design doc and watching it \u201cactually implement it, add metrics, observability, run the profiler, improve it to the point that it\u2019s the exact thing that he was hoping to produce.\u201d The change came between model versions. \u201cIt\u2019s almost slowly, slowly, slowly, all at once.\u201d<\/p>\n<p>\u201cAnybody who studies these systems systematically, especially if they have a background in cognitive science, will realize that they do stupid things all the time,\u201d Marcus said. 
He rattled off examples: models that can\u2019t label parts of an elephant, that produce five hands when asked for three, that generate wiring diagrams \u201cthat might actually kill people if they took them seriously.\u201d<\/p>\n<p>The problem, Marcus said, is structural. \u201cWhen you take these systems out of the kinds of things they\u2019ve been trained on, they can often have problems. I wrote about this in 1998. We\u2019ve seen this over and over and over again.\u201d<\/p>\n<p>One model or many minds?<\/p>\n<p>Brockman is betting on unification: one model architecture that handles everything, and OpenAI is currently building a so-called \u201csuperapp\u201d that fuses disparate AI functions together. \u201cThe pretty wild thing about what AGI is, is that sometimes these very different-looking applications, between speech to speech, image generation, text, it\u2019s all kind of one model and we just sort of tweak that in slightly different ways.\u201d<\/p>\n<p>OpenAI pulled back from Sora, its video generation model, precisely because it ran on a different tech tree\u2026 and was very expensive. \u201cIf you branch too far and you have two different artifacts, that is very hard to sustain in a world where there is limited compute,\u201d Brockman said. The future, in his view, is \u201cone AI layer that can be pointed at specific applications in a very thin way.\u201d<\/p>\n<p>Marcus rejects this premise. \u201cThere\u2019s no one way the mind works because the mind is not one thing,\u201d he said, quoting two cognitive psychologists. \u201cInstead, the mind has parts, and different parts of the mind operate in different ways.\u201d<\/p>\n<p>\u201cThe whole LLM hypothesis is that the mind works in one way: by using this thing called attention and looking over large amounts of data and doing sequence prediction. This is just one of the things the mind does.\u201d<\/p>\n<p>His alternative? Modularity and hybrid architectures. 
\u201cThe very idea of AGI, of artificial general intelligence, might not even be the right goal, at least right now.\u201d<\/p>\n<p>His exhibit A: AlphaFold 3, DeepMind\u2019s <a href=\"https:\/\/deepmind.google\/science\/alphafold\/\" rel=\"nofollow noopener\" target=\"_blank\">protein structure prediction system<\/a>. \u201cMy favorite example of any AI that\u2019s ever been built. It does not try to write sonnets or evaluate sonnets. All it does is take a sequence of DNA nucleotides and predict where a bunch of atoms will be. It\u2019s about as specialized as you can get, and it\u2019s fantastic. Every biologist in the world knows about it and many of them use it every day.\u201d<\/p>\n<p>\u201cMaybe we don\u2019t need AGI right now,\u201d Marcus said.<\/p>\n","protected":false},"excerpt":{"rendered":"Depending on who you ask in Silicon Valley or beyond, AGI, artificial general intelligence, is either a finished&hellip;\n","protected":false},"author":2,"featured_media":8789,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,636,7561,2792,3013,25,7562,6695,7563,2140,2446,2225,7564,7565,7566,58,157,7567,7568,7569],"class_list":{"0":"post-8788","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-ai-development","10":"tag-alphafold-3","11":"tag-andrej-karpathy","12":"tag-artificial-general-intelligence","13":"tag-artificial-intelligence","14":"tag-bitter-lesson","15":"tag-cognitive-science","16":"tag-gary-marcus","17":"tag-greg-brockman","18":"tag-jensen-huang","19":"tag-llms","20":"tag-modularity","21":"tag-moores-law","22":"tag-neural-networks","23":"tag-nvidia","24":"tag-openai","25":"tag-richard-sutton","26":"tag-scaling-laws","27":"tag-spud"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/8788","targetHints":{"allow":["GET"]}}],"collection":[{"href":"ht
tps:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=8788"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/8788\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/8789"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=8788"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=8788"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=8788"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}