{"id":17379,"date":"2026-04-26T13:58:14","date_gmt":"2026-04-26T13:58:14","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/17379\/"},"modified":"2026-04-26T13:58:14","modified_gmt":"2026-04-26T13:58:14","slug":"the-blogs-for-israel-to-win-in-ai-it-must-play-a-different-game-jj-ben-joseph","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/17379\/","title":{"rendered":"The Blogs: For Israel to win in AI it must play a different game | JJ Ben-Joseph"},"content":{"rendered":"<p class=\"text-token-text-primary leading-relaxed\">Let me first say this: this money-heavy approach makes sense. It is grounded in real achievements. But there is a difference between respecting the phase that got us here and assuming it is the final map. The question is not whether scale matters. It does. The question is whether scale is the whole story.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">Brute force worked because the empirical results were hard to argue with. OpenAI\u2019s 2020 scaling-laws paper found clean power-law improvements as model size, dataset size, and training compute increased. DeepMind\u2019s Chinchilla work then showed that careful co-scaling of parameters and tokens could outperform much larger models at the same compute budget. They found measurable regularities that gave the industry permission to build bigger.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">That logic carried into the modern foundation-model era. The GPT-4 technical report described a large multimodal model whose performance could be predicted across scales, and scaling delivered the products that reset public expectations for AI software. These systems were proof that immense clusters and vast corpora could produce models that felt qualitatively new to the world.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The compute and data believers therefore deserve respect. 
Even before launching SSI, Ilya Sutskever was known as one of the early champions of scaling, and Reuters noted that his ideas helped shape the investment wave in chips, data centers, and energy that now defines frontier AI. Any serious critique of the current race should start with that honesty: the people who bet on scale were right for a long time.\n<\/p>\n<p>But scaling may not be enough<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">Success at one stage does not guarantee sufficiency at the next. Public human text is finite. One influential estimate from Epoch AI researchers concluded that, if prevailing trends continue, frontier models could run into the limits of public human-generated text sometime between 2026 and 2032. And the corpus is not just finite. It is increasingly contested, as copyright cases against OpenAI, Anthropic, Meta, Google, and others have turned training data into a legal battlefield over books, news, music, and other works.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">More important, next-token prediction is not the same thing as judgment. Toolformer framed the problem plainly: language models at scale still struggle with arithmetic and factual lookup, tasks where much smaller specialized systems can excel. ReAct showed that models reason better when they can act, query external sources, and revise plans instead of merely extending a text stream. OpenAI later said its models improve with explicit reasoning and with more time spent thinking at inference, under constraints that differ substantially from classic pretraining. This insight came from Sutskever himself.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The most suggestive results now often come from systems that add structure, search, and feedback. 
AlphaGeometry bypassed the scarcity of human demonstrations by synthesizing millions of theorems and proofs, then combining a neural model with symbolic deduction to solve Olympiad geometry near gold-medalist level. FunSearch paired a pretrained language model with a rigorous evaluator and found new results in mathematics and algorithms.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">That is the deeper reason to doubt that the shortest route to superintelligence is simply \u201cmake the autocomplete engine bigger.\u201d The frontier keeps hinting that better reasoning loops, better tools, better environments, and better feedback may matter as much as raw pretraining scale.\n<\/p>\n<p>The Safe Superintelligence approach<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">This is what makes Safe Superintelligence Inc. so interesting. The company was founded by Ilya Sutskever, Daniel Gross, and Daniel Levy. On its official site, the company describes itself as a firm with offices in Palo Alto and Tel Aviv, says it has deep roots in both places, and says it is assembling a lean team focused on superintelligence and \u201cnothing else.\u201d The same statement says the company is a \u201cstraight-shot\u201d lab with one goal and one product: safe superintelligence.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">SSI has also been unusually clear about what it is not. According to Reuters, the company has no revenue, remains largely secretive about its technical method, and wants to \u201cscale in peace\u201d by insulating research from short-term commercial pressures.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">Sutskever told Reuters that its first product would be safe superintelligence. In practical terms, that means no chatbot treadmill, no API revenue target, and no enterprise software detour being mistaken for a research strategy. At the same time, SSI is not anti-infrastructure. 
Its first $1 billion round was explicitly earmarked in part for computing power and hiring.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">SSI raised $1 billion in 2024. By February 2025, it was reported to be discussing funding at a valuation above $20 billion. In April 2025, publications including CTech and Built In reported a further $2 billion raise at roughly a $32 billion valuation, despite no commercial product and no intent to produce one. Investors including Alphabet and Nvidia were also reported as backers, and Reuters said SSI had become a significant external customer for Google\u2019s TPUs.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The subtext is even more interesting. It is a second move by one of the people who helped legitimize scaling in the first place. Reuters reported that Sutskever both saw the power of scaling early and later worried about ceilings imposed by dwindling data, while also helping launch OpenAI\u2019s reasoning-model direction. SSI therefore looks like a second draft written by one of the architects of the first.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">In the expensive brute-force model, research can become constrained by standard product concerns which have nothing to do with actually advancing the state of AI. OpenAI and Anthropic must worry about product cycles, distribution pressure, and the need to ship a slightly better assistant every quarter. In the straight-shot model, a lab can pursue a tighter theory of intelligence.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">That tighter theory probably includes several ingredients. Better reasoning, where the system learns how to search, decompose, verify, and revise. Better agency, where it can plan over time instead of answering in one hop. Better tool use, where it calls code, search, formal systems, and simulators instead of pretending to be self-sufficient. 
Better synthetic environments, where scarcity of human data is replaced by generated tasks with exact feedback. Better scientific loops, where candidate ideas are checked by evaluators that can reject attractive nonsense. And better safety, not bolted on after capability, but built into the architecture, objectives, and deployment assumptions from the start.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">Maybe the decisive breakthrough will not look like an ever-larger data center full of GPUs. Maybe it will look like a small team in Tel Aviv building systems that reason more faithfully, remember more usefully, and fail more legibly because they were designed to do those things.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">We should be humble here. SSI\u2019s actual research remains secret. But the contours of a plausible alternative are already visible in public work across reasoning, tool use, evaluator-guided search, and formal feedback, much of which was invented by the same people who created SSI.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">Israel cannot win by being the biggest. Israel can only win by being sharper. Official Israeli policy documents cite Stanford\u2019s AI Index as placing Israel first globally in AI talent concentration, while the OECD says the country already has a strong AI industry and that AI now accounts for nearly half of new high-tech startups and funding rounds. A separate 2025 talent report estimated more than 2,100 AI companies in Israel, with roughly 77% employing 50 people or fewer.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">SSI\u2019s Tel Aviv office reflects a real judgment about where frontier talent lives. 
By early 2025, CTech reported that the company was expanding its presence, hiring engineers, and leasing office space in Midtown Tel Aviv.\n<\/p>\n<p>The national opportunity<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The OECD has warned that computing infrastructure is a weakness for Israel\u2019s AI sector. What is to be done? That weakness is precisely why Israel belongs in the straight-shot ecosystem, where focus matters more than raw compute.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The national task, then, is to resist mimicking the biggest AI data center empires. It is to create conditions in which focused superintelligence work can happen locally. That starts with leveraging Israel\u2019s unfair advantage in talent.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">From there, the agenda becomes fairly clear. Fund frontier research fellowships that let a small number of exceptional people spend years working directly on superintelligence. Build security-cleared labs that combine safety research with cyber defense and resilient infrastructure. Tighten the bridge between universities, hospitals, defense units, and startups. Expand chip and systems programs so that Israel contributes to the hardware stack and gets first priority on frontier hardware. And make room for companies whose ambition is not another thin SaaS wrapper (<a href=\"https:\/\/blogs.timesofisrael.com\/yes-israel-is-lagging-in-ai-and-what-to-do-about-it\/\" target=\"_blank\" rel=\"noopener nofollow\">see my last article on why this approach will fail<\/a>) but deep technical work with long time horizons aimed squarely at the AGI problem.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The goal is to make Israel a central place where safe superintelligence is technically shaped. Of course not every breakthrough will come from Israel, and not every Israeli AI company should try to become an SSI. 
But Israel should refuse the smaller role that is too often assigned to it, namely being a branch office for someone else\u2019s industry. The country has the ingredients to help invent the next one.\n<\/p>\n<p class=\"text-token-text-primary leading-relaxed\">The future of AI will not belong only to whoever buys the most GPUs or scrapes the most data. It may belong to whoever finds the shortest path from where we are now to a truly general artificial intelligence agent. SSI\u2019s presence in Tel Aviv should be read in that light. It is a signal, especially to Israel, that this country does not need to be the largest actor in the race to matter. It needs to be the sharpest.<\/p>\n","protected":false},"excerpt":{"rendered":"Let me first say this: this money heavy approach makes sense. It is grounded in real achievements. But&hellip;\n","protected":false},"author":2,"featured_media":17380,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,859,25],"class_list":{"0":"post-17379","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-ai-artificial-intelligence","10":"tag-artificial-intelligence"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/17379","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=17379"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/17379\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/medi
a\/17380"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=17379"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=17379"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=17379"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}