{"id":150615,"date":"2025-10-29T01:53:18","date_gmt":"2025-10-29T01:53:18","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/150615\/"},"modified":"2025-10-29T01:53:18","modified_gmt":"2025-10-29T01:53:18","slug":"neural-dispatch-agentic-ais-lack-of-intelligence-a-deepseek-moment-and-nvidias-ai-supercomputer-ht-tech","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/150615\/","title":{"rendered":"Neural Dispatch: Agentic AI\u2019s lack of intelligence, a DeepSeek moment, and Nvidia\u2019s AI supercomputer \n(HT Tech)"},"content":{"rendered":"\n<p><strong>Cognitive warmup.<\/strong> Is this another DeepSeek moment for AI to contend with? Chinese tech giant Alibaba says its Alibaba Cloud platform has successfully tested a new compute pooling system called Aegaeon, which reduced the number of Nvidia H20 GPUs required to serve dozens of models of up to 72 billion parameters from 1,192 to 213. This is according to a research paper presented this week at the 31st Symposium on Operating Systems Principles (SOSP) in Seoul, South Korea. \u201cAegaeon is the first work to reveal the excessive costs associated with serving concurrent LLM workloads on the market,\u201d researchers from Peking University and Alibaba Cloud wrote in <a rel=\"nofollow noopener\" href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3731569.3764815?utm_source=chatgpt.com\" target=\"_blank\" class=\"backlink\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\" data-vars-anchor-text=\"the paper\">the paper<\/a>. This may well be another example of working smarter rather than simply spending on massive compute infrastructure \u2013 exactly the kind of efficiency a lot of AI companies have been talking about intently over the past few weeks. Alibaba\u2019s methodology means a single GPU can support up to seven models for users to call upon, compared with a maximum of three under alternative systems, the researchers point out. 
Investors will soon begin to realise this.<\/p>\n<p>     <img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/10\/Cognitive_Warmup-_AI_models_GPUs_1761676987912_1761677014243.png\" alt=\" AI models GPUs\" title=\" AI models GPUs\"\/>   AI models GPUs    ALGORITHM<\/p>\n<p>This week, we talk about Microsoft trying to find a more stable AI foundation that doesn\u2019t have to rely on OpenAI as much, Nvidia\u2019s \u2018personal AI supercomputer\u2019 that costs a pretty penny for your AI obsession, and OpenAI telling us, again, that it\u2019s concerned about the well-being of humanity in general.<\/p>\n<p>  Microsoft is redrawing AI alliances  <img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/10\/Algorithm_-_Microsoft_MAI-Image1_1761677135975.png\" class=\"lazy\" alt=\"Microsoft MAI-Image1\" title=\"Microsoft MAI-Image1\"\/> Microsoft MAI-Image1  <\/p>\n<p>Microsoft has introduced a new text-to-image model called MAI-Image-1, its first generative image system developed in-house. It\u2019s a clear move towards reducing reliance on partners such as OpenAI, and perhaps even a signal (among many others; the widening scope of Microsoft\u2019s work with Anthropic is an indicator too) that the Redmond-based tech giant wants a stronger foundation to work with, and more of a stake in the overall creative stack. Early testing has returned impressive results on AI benchmarks, which is always a good sign that generative foundations such as lighting, texture, and realism aren\u2019t completely off from the outset. You\u2019d be surprised how many image generators still fall short, even now. Strategically, it\u2019s a big deal \u2014 it opens the door to tighter integration into Copilot, which Microsoft has been talking up a lot of late, and makes a case for better control of intellectual property. 
As generative image tools become commoditised, control and quality will separate the winners from the noise.<\/p>\n<p> Nvidia, a personal AI supercomputer, and Apple  <img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/10\/Algorithm_-_Nvidia_AI_supercomputer_1761677293196.jpeg\" class=\"lazy\" alt=\"Nvidia AI supercomputer\" title=\"Nvidia AI supercomputer\"\/> Nvidia AI supercomputer  <\/p>\n<p>There was considerable excitement this past week when Nvidia began to ship its compact AI computer, the DGX Spark, meant for developers and researchers. Priced at $3,999 (that\u2019d be around \u20b93,82,000 onwards), it packs serious power: a Grace Blackwell GB10 superchip and 128 GB of unified memory delivering up to a petaflop of inferencing performance. This means models with hundreds of billions of parameters can now be run or fine-tuned right on a desktop. Powerful AI compute hardware is coming to your desk. But I\u2019ve been wondering: didn\u2019t we already have similar levels of compute available (and therefore Nvidia isn\u2019t breaking new ground here) in the form of the Apple Mac Studio with the M1 Max and the Mac Mini with an M4 Pro? I chanced upon a nice graphic by <a rel=\"nofollow noopener\" href=\"https:\/\/lmsys.org\/\" target=\"_blank\" class=\"backlink\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\" data-vars-anchor-text=\"lmsys.org\">lmsys.org<\/a> comparing benchmarks, which indicates that an M1 Max Mac Studio (priced around $2,000) consistently returns higher output tokens per second than Nvidia\u2019s computing foray. Even the really compact Mac Mini (around $1,400) matches the DGX Spark. Makes one wonder what the thinking was&#8230;<\/p>\n<p> AI therapy by day, AI temptation by night?<\/p>\n<p>OpenAI has established what it calls an Expert Council on Wellbeing and AI. 
This will consist of eight specialists tasked with studying how constant interaction with AI systems affects human emotion, cognition, and behaviour. The focus is on defining what \u201chealthy AI use\u201d means across contexts, from education to therapy to everyday chatbots. The council will advise on design, ethics, and potential behavioural side effects. Is it a sign that the AI conversation is maturing? Unlikely. But it is clear OpenAI returns to the smarter-systems-and-societal-impact conversation quite often, to keep investors, consumers and perhaps even policymakers happy. This is the same company that wants to make AI sexting mainstream, mind you. AGI can wait; porn is the revenue generator for the immediate future?<\/p>\n<p> THINKING  <img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/10\/Thinking_-_Andrej_Karpathy_1761677441625.jpg\" class=\"lazy\" alt=\"Andrej Karpathy\" title=\"Andrej Karpathy\"\/> Andrej Karpathy    <img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/10\/quoteText.png\" alt=\"\"\/>  <\/p>\n<p>\u201cThey just don\u2019t work. They don\u2019t have enough intelligence, they\u2019re not multimodal enough, they can\u2019t do computer use and all this stuff. They don\u2019t have continual learning. You can\u2019t just tell them something and they\u2019ll remember it. They\u2019re cognitively lacking and it\u2019s just not working. It will take about a decade to work through all of those issues.\u201d \u2014 Andrej Karpathy, ex-OpenAI, on the Dwarkesh Podcast.<\/p>\n<p>Andrej Karpathy has taken a pin to the AI bubble. He estimates artificial general intelligence (AGI) is at least 10 years away, says the code generated by today\u2019s models includes considerable \u2018slop\u2019, and argues the approach should be smaller models with better memorisation and context. 
In case you\u2019re in the mood to question Karpathy\u2019s credentials just because he may not align with your world view on all things AI, and on how AI agents would be better than humans, here\u2019s something to ponder \u2014 he\u2019s a research scientist and a founding member of OpenAI, was Senior Director of AI at Tesla, and has since founded Eureka Labs, which focuses on education-aligned AI.<\/p>\n<p><strong>The Context:<\/strong> Andrej Karpathy\u2019s words hit at the core of the current AI narrative \u2014 and they do so from someone who has actually built the systems now being mythologised. His comments come at a time when \u201cAI agents\u201d have become Silicon Valley\u2019s new obsession, marketed as the next big leap beyond chatbots, and as better than humans in the workplace. But his verdict is sobering. The technology simply isn\u2019t ready. In his view, today\u2019s large language models are still pattern mimics, not thinkers. They can recall, summarise, and respond impressively within a conversation, but lack the cognitive infrastructure that makes intelligence continuous and cumulative. They don\u2019t truly remember; they merely reference transient context windows. They don\u2019t understand multimodality; they correlate text and image tokens without an integrated sense of reasoning across them. They don\u2019t learn from experience; they only re-perform from training.<\/p>\n<p>The current hype cycle around agentic AI \u2013 systems that can act on behalf of users, browse the web, execute commands, and make autonomous decisions \u2013 does just enough to paper over structural gaps. Every AI company\u2019s demos look polished, and tall claims create an illusion of progress, but the foundation seems far from general intelligence.<\/p>\n<p><strong>A Reality Check:<\/strong> Karpathy\u2019s decade-long timeline feels almost radical in an industry addicted to being seen showing off quarterly breakthroughs. 
He\u2019s not saying progress will stall, only that the essential leap from simulation to cognition will take far longer than investors or AI bosses are willing to admit. True agents will need persistent memory, real-time learning, cross-modal reasoning, and safe self-correction, all of which demand breakthroughs in model architecture, data efficiency, and energy cost. And perhaps even in the size of the models, with larger not always better.<\/p>\n<p>For AI companies, every new model is somehow always more aligned, more efficient, and more multimodal than the previous one, which was supposedly already the greatest. Are we simply making the best better? It is all boxed within a narrow frame of competence, because the reality is these systems can\u2019t sustain context or evolve behaviour the way humans do naturally.<\/p>\n<p>For enterprises, this perspective matters. It suggests that AI, for now, should be positioned as one piece of the puzzle, not as a replacement for, or even an amplifier of, anything that requires human creativity, insight, and productivity. Automate the basic tasks; that\u2019s all. Overpromising \u201cautonomous agents\u201d risks frustration and mistrust, especially when systems inevitably fail at continuity or nuance. In that sense, Karpathy\u2019s realism isn\u2019t cynicism. It challenges the industry to focus less on hype and more on foundational science: memory systems, continual learning, interpretability, and integration with real-world perception.<\/p>\n<p>Neural Dispatch is your weekly guide to the rapidly evolving landscape of artificial intelligence. Each edition delivers curated insights on breakthrough technologies, practical applications, and strategic implications shaping our digital future.<\/p>\n","protected":false},"excerpt":{"rendered":"Cognitive warmup. Is this another DeepSeek moment for AI to contend with? 
Chinese tech giant Alibaba says its&hellip;\n","protected":false},"author":2,"featured_media":150616,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[74],"tags":[88337,88336,24532,8736,18,19,17,292,2428,82],"class_list":{"0":"post-150615","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-ai-developments","9":"tag-ai-supercomputer","10":"tag-alibaba-cloud","11":"tag-deepseek","12":"tag-eire","13":"tag-ie","14":"tag-ireland","15":"tag-nvidia","16":"tag-south-korea","17":"tag-technology"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@ie\/115454962143832088","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/150615","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=150615"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/150615\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/150616"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=150615"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=150615"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=150615"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}