{"id":324922,"date":"2025-10-22T22:27:09","date_gmt":"2025-10-22T22:27:09","guid":{"rendered":"https:\/\/www.europesays.com\/us\/324922\/"},"modified":"2025-10-22T22:27:09","modified_gmt":"2025-10-22T22:27:09","slug":"why-coheres-ex-ai-research-lead-is-betting-against-the-scaling-race","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/324922\/","title":{"rendered":"Why Cohere&#8217;s ex-AI research lead is betting against the scaling race"},"content":{"rendered":"<p id=\"speakable-summary\" class=\"wp-block-paragraph\">AI labs are racing to build data centers <a rel=\"nofollow noopener\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/16\/zuckerberg-meta-data-center-ai-manhattan\" target=\"_blank\">as large as Manhattan,<\/a> each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in \u201cscaling\u201d \u2014 the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.<\/p>\n<p class=\"wp-block-paragraph\">But a growing chorus of AI researchers say the scaling of large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.<\/p>\n<p class=\"wp-block-paragraph\">That\u2019s the bet Sara Hooker, Cohere\u2019s former VP of AI Research and a Google Brain alumna, is taking with her new startup, <a rel=\"nofollow noopener\" href=\"https:\/\/adaptionlabs.ai\/\" target=\"_blank\">Adaption Labs<\/a>. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it\u2019s built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. 
Hooker, who left Cohere in August, <a rel=\"nofollow\" href=\"https:\/\/x.com\/sarahookr\/status\/1975581548121628920\">quietly announced<\/a> the startup this month to start recruiting more broadly.<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">I&#8217;m starting a new project.<\/p>\n<p>Working on what I consider to be the most important problem: building thinking machines that adapt and continuously learn. <\/p>\n<p>We have incredibly talent dense founding team + are hiring for engineering, ops, design. <\/p>\n<p>Join us: <a rel=\"nofollow\" href=\"https:\/\/t.co\/eKlfWAfuRy\">https:\/\/t.co\/eKlfWAfuRy<\/a><\/p>\n<p>\u2014 Sara Hooker (@sarahookr) <a rel=\"nofollow noopener\" href=\"https:\/\/twitter.com\/sarahookr\/status\/1975581548121628920?ref_src=twsrc%5Etfw\" target=\"_blank\">October 7, 2025<\/a><\/p><\/blockquote>\n<p class=\"wp-block-paragraph\">In an interview with TechCrunch, Hooker says Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the methods behind this approach or whether the company relies on LLMs or another architecture.<\/p>\n<p class=\"wp-block-paragraph\">\u201cThere is a turning point now where it\u2019s very clear that the formula of just scaling these models \u2014 scaling-pilled approaches, which are attractive but extremely boring \u2014 hasn\u2019t produced intelligence that is able to navigate or interact with the world,\u201d said Hooker.<\/p>\n<p class=\"wp-block-paragraph\">Adapting is the \u201cheart of learning,\u201d according to Hooker. For example, stub your toe when you walk past your dining room table, and you\u2019ll learn to step more carefully around it next time. AI labs have tried to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. 
However, today\u2019s RL methods don\u2019t help AI models in production \u2014 meaning systems already being used by customers \u2014 learn from their mistakes in real time. They just keep stubbing their toes.<\/p>\n<p class=\"wp-block-paragraph\">Some AI labs offer consulting services to help enterprises fine-tune their AI models to their custom needs, but that comes at a price. OpenAI reportedly requires customers to <a rel=\"nofollow noopener\" href=\"https:\/\/www.theinformation.com\/articles\/openai-takes-page-palantir-doubles-consulting-services?rc=dp0mql\" target=\"_blank\">spend upwards of $10 million<\/a> with the company before it will offer consulting services on fine-tuning.<\/p>\n<p class=\"wp-block-paragraph\">\u201cWe have a handful of frontier labs that determine this set of AI models that are served the same way to everyone, and they\u2019re very expensive to adapt,\u201d said Hooker. \u201cAnd actually, I think that doesn\u2019t need to be true anymore, and AI systems can very efficiently learn from an environment. Proving that will completely change the dynamics of who gets to control and shape AI, and really, who these models serve at the end of the day.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Adaption Labs is the latest sign that the industry\u2019s faith in scaling LLMs is wavering. A recent paper from MIT researchers found that the world\u2019s largest AI models <a rel=\"nofollow noopener\" href=\"https:\/\/www.wired.com\/story\/the-ai-industrys-scaling-obsession-is-headed-for-a-cliff\/\" target=\"_blank\">may soon show diminishing returns.<\/a> The vibes in San Francisco seem to be shifting, too. 
The AI world\u2019s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with famous AI researchers.<\/p>\n<p class=\"wp-block-paragraph\">Richard Sutton, a Turing Award winner regarded as \u201cthe father of RL,\u201d told Patel in September that <a rel=\"nofollow noopener\" href=\"https:\/\/www.youtube.com\/watch?v=21EYKqUsPfg\" target=\"_blank\">LLMs can\u2019t truly scale<\/a> because they don\u2019t learn from real-world experience. This month, early OpenAI employee Andrej Karpathy told Patel he <a rel=\"nofollow noopener\" href=\"https:\/\/www.youtube.com\/watch?v=lXUZvyajciY\" target=\"_blank\">had reservations<\/a> about the long-term potential of RL to improve AI models.<\/p>\n<p class=\"wp-block-paragraph\">These types of fears aren\u2019t unprecedented. In late 2024, <a href=\"https:\/\/techcrunch.com\/2024\/11\/20\/ai-scaling-laws-are-showing-diminishing-returns-forcing-ai-labs-to-change-course\/\" target=\"_blank\" rel=\"noopener\">some AI researchers raised concerns<\/a> that scaling AI models through pretraining \u2014 in which AI models learn patterns from massive datasets \u2014 was hitting diminishing returns. Until then, pretraining had been the secret sauce for OpenAI and Google to improve their models.<\/p>\n<p class=\"wp-block-paragraph\">Those pretraining scaling concerns are now <a href=\"https:\/\/techcrunch.com\/2025\/02\/27\/openai-unveils-gpt-4-5-orion-its-largest-ai-model-yet\/\" target=\"_blank\" rel=\"noopener\">showing up in the data<\/a>, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.<\/p>\n<p class=\"wp-block-paragraph\">AI labs seem convinced that scaling up RL and AI reasoning models is the new frontier. 
OpenAI researchers previously told TechCrunch that <a href=\"https:\/\/techcrunch.com\/2025\/08\/03\/inside-openais-quest-to-make-ai-do-anything-for-you\/\" target=\"_blank\" rel=\"noopener\">they developed their first AI reasoning model<\/a>, o1, because they thought it would scale up well. Meta and Periodic Labs researchers recently <a rel=\"nofollow\" href=\"https:\/\/x.com\/Devvrit_Khatri\/status\/1978864275658871099\">released a paper <\/a>exploring how RL could scale performance further \u2014 a study that reportedly <a rel=\"nofollow\" href=\"https:\/\/x.com\/agarwl_\/status\/1978874743680843886\">cost more than $4 million,<\/a> underscoring how expensive current approaches remain.<\/p>\n<p class=\"wp-block-paragraph\">Adaption Labs, by contrast, aims to find the next breakthrough, and prove that learning from experience can be far cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch decks. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.<\/p>\n<p class=\"wp-block-paragraph\">\u201cWe\u2019re set up to be very ambitious,\u201d said Hooker, when asked about her investors.<\/p>\n<p class=\"wp-block-paragraph\">Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks \u2014 a trend Hooker wants to continue pushing on.<\/p>\n<p class=\"wp-block-paragraph\">She also built a reputation for broadening access to AI research globally, hiring research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire worldwide.<\/p>\n<p class=\"wp-block-paragraph\">If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be huge. 
Billions have already been invested in scaling LLMs, with the assumption that bigger models will lead to general intelligence. But it\u2019s possible that true adaptive learning could prove not only more powerful \u2014 but far more efficient.<\/p>\n<p class=\"wp-block-paragraph\">Marina Temkin contributed reporting.<\/p>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and&hellip;\n","protected":false},"author":3,"featured_media":324923,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[691,92745,69139,31165,36386,738,133041,31559,158,67,132,68],"class_list":{"0":"post-324922","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-model","10":"tag-ai-progress","11":"tag-ai-research","12":"tag-ai-training","13":"tag-artificial-intelligence","14":"tag-cohere","15":"tag-scaling","16":"tag-technology","17":"tag-united-states","18":"tag-unitedstates","19":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/115420177520558258","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/324922","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=324922"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/324922\/revisions"}],"wp:featuredmedia":[{"embed
dable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/324923"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=324922"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=324922"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=324922"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}