{"id":190435,"date":"2025-06-17T01:35:15","date_gmt":"2025-06-17T01:35:15","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/190435\/"},"modified":"2025-06-17T01:35:15","modified_gmt":"2025-06-17T01:35:15","slug":"sandia-deploys-spinnaker2-neuromorphic-system","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/190435\/","title":{"rendered":"Sandia Deploys SpiNNaker2 Neuromorphic System"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/06\/SpiNNcloud-chips-1030x438.png\" alt=\"\" title=\"SpiNNcloud chips\"\/><\/p>\n<p>Some heavy hitters like <a href=\"https:\/\/www.nextplatform.com\/2021\/09\/30\/a-first-look-at-intels-next-level-neuromorphic-engine\/\" target=\"_blank\" rel=\"noopener\">Intel<\/a>, <a href=\"https:\/\/www.nextplatform.com\/2018\/09\/27\/a-rare-peek-into-ibms-true-north-neuromorphic-chip\/\" target=\"_blank\" rel=\"noopener\">IBM<\/a>, and Google along with a growing number of smaller startups for the past couple of decades have been pushing the development of neuromorphic computing, hardware that looks to mimic the structure and function of the human brain.<\/p>\n<p>It\u2019s a subject that The Next Platform has spent a lot of time tracking its development, from possible <a href=\"https:\/\/www.nextplatform.com\/2017\/08\/02\/os-neuromorphic-computing-von-neumann-devices\/\" target=\"_blank\" rel=\"noopener\">operating systems<\/a> for the systems and <a href=\"https:\/\/www.nextplatform.com\/2017\/06\/26\/u-s-military-sees-future-neuromorphic-computing\/\" target=\"_blank\" rel=\"noopener\">military interest<\/a> to the <a href=\"https:\/\/www.nextplatform.com\/2022\/05\/11\/neuromorphic-computing-will-need-partners-to-break-into-the-datacenter\/\" target=\"_blank\" rel=\"noopener\">need for partners<\/a> and <a href=\"https:\/\/www.nextplatform.com\/2022\/09\/30\/software-not-hardware-will-drive-quantum-and-neuromorphic-computing\/\" target=\"_blank\" 
rel=\"noopener\">software<\/a> to help bring useful neuromorphic computing to reality. As with other computing paradigms \u2013 quantum comes to mind \u2013 these things take time.<\/p>\n<p>That said, sentiment in recent months seemingly has shifted from the technology itself to what will help break it out of its niche status and into the mainstream. Proof-of-concepts are being run and systems are scaling. A goal now appears to be finding that so-called \u201ckiller app,\u201d the use case that will propel the technology forward.<\/p>\n<p>Potential abounds. In a <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-08253-8\" target=\"_blank\" rel=\"noopener\">paper in Nature<\/a> in January argued that neuromorphic computing is ready to make the leap into production environments, and in <a href=\"https:\/\/www.nature.com\/articles\/s41467-025-57352-1\" target=\"_blank\" rel=\"noopener\">another Nature paper<\/a> in April noted the power-efficient capabilities of the technology in pointing to such areas as <a href=\"https:\/\/www.nextplatform.com\/2021\/05\/18\/neuromorphic-computing-innovation-favors-the-edge\/\" target=\"_blank\" rel=\"noopener\">Internet of Things and edge processing<\/a>. AI is another area, given the electricity-hungry nature of the GPUs and the systems they power for training AI models. Neuromorphic systems may be able alleviate some of these demands.<\/p>\n<p>Flying The Efficiency Flag<\/p>\n<p>SpiNNcloud, the four-year-old company that in 2021 spun out of the Dresden University of Technology and whose chip architecture \u2013 SpiNNaker1 (the chip on the left below) \u2013 was designed by Steve Furber, the driving force behind the creation of the Arm microprocessor, is carrying that message of power efficiency and AI. 
On the landing page of its website, the company touts its <a href=\"https:\/\/spinncloud.com\/\" target=\"_blank\" rel=\"noopener\">SpiNNaker2<\/a> as the foundation of the \u201cultra energy-efficient infrastructure for new-generation AI inference,\u201d claiming it is 18 times more efficient than the GPUs that are powering many AI systems now.<\/p>\n<p>The upcoming successor, SpiNNext (on the right), will come in at 78 times more efficient, according to the company.<\/p>\n<p>SpiNNcloud will now be able to put the SpiNNaker2 architecture to the test. The German company launched the hybrid AI-HPC platform, made it commercially available, and said that Sandia National Laboratories \u2013 along with institutions like the Technical University of Munich and the University of G\u00f6ttingen in Germany \u2013 was among its first customers.<\/p>\n<p>This week, company executives announced that Sandia has deployed SpiNNaker2, which simulates about 175 million neurons and is among the five largest computing platforms modeled on how the human brain works.<\/p>\n<p><a href=\"http:\/\/www.nextplatform.com\/wp-content\/uploads\/2025\/06\/SpiNNcloud-Sandia-install.png\" rel=\"attachment wp-att-145936 noopener\" target=\"_blank\"><img fetchpriority=\"high\" decoding=\"async\" class=\"wp-image-145936 size-full\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/06\/SpiNNcloud-Sandia-install.png\" alt=\"\" width=\"1600\" height=\"1057\"  \/><\/a>(Photo Credit: Craig Fritz,\u00a0Sandia\u00a0National Labs)<\/p>\n<p>\u201cLast time was a generic announcement: We started to work with Sandia,\u201d Hector Gonzalez, co-founder and chief executive officer for SpiNNcloud, told The Next Platform. 
\u201cThey received the system, the supercomputer, and they\u2019re going to be working with it in a few applications.\u201d<\/p>\n<p>24 Boards With 48 Chips Each<\/p>\n<p>What the Sandia scientists stood up is a highly parallel architecture with 24 boards, each of which holds 48 SpiNNaker2 chips that are interconnected in toroidal topologies. Each microchip has 152 Arm-based low-power processing elements that are interconnected in a network-on-chip, Gonzalez said.<\/p>\n<p><a href=\"http:\/\/www.nextplatform.com\/wp-content\/uploads\/2025\/06\/SpiNNcloud-Spinnaker2.png\" rel=\"attachment wp-att-145937 noopener\" target=\"_blank\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-145937\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/06\/SpiNNcloud-Spinnaker2.png\" alt=\"\" width=\"1447\" height=\"813\"  \/><\/a><\/p>\n<p>\u201cThe microchips get grouped into boards of 48 chips, and then these boards also get interconnected, board to board,\u201d he said. \u201cWe have high-speed links that have been custom designed to expand the hierarchies of the boards. Then we interconnect the boards in a strategic way so that you build those toroidal large-scale networks. You fold them strategically so that you always ensure the shortest communication path. There is actually software that we use to find the right connectivity so the system starts to send packages and packets. Then there is an LED-based system that we use to identify the wiring of the infrastructure. Once it\u2019s wired, you put them into rack-based systems. Then we built large-scale systems using this technology.\u201d<\/p>\n<p>He noted the power-efficiency advantage over GPUs as well as other benefits. \u201cSomething that you can do that you cannot do with a GPU is that you can have super-fine granular control of all these 175K cores. This is one of the distinguishing factors. 
The system is globally asynchronous, locally synchronous, so the individual processes can be fully controlled and you can fully isolate paths within the processor. This is \u2026 very difficult to do in a GPU because essentially in a GPU, you have these streams of multi-processors [where it\u2019s] harder to isolate the paths.\u201d<\/p>\n<p>In addition, Gonzalez said the microchip is not what\u2019s typically found in a neuromorphic system because it doesn\u2019t commit to spiking neurons [which mimic how the brain processes information through electrical pulses]. With SpiNNaker2, \u201cyou can pretty much implement [and] leverage these event-based characteristics from the neuromorphic domain, even in the mainstream DNN domain, even in mainstream deep neural networks. At the same time, it also lets you scale up neural symbolic models. Because it\u2019s fully programmable, you can actually scale up neural symbolic models, like reasoners that have a symbolic layer, where at the same time you have neural layers.\u201d<\/p>\n<p>Sandia Labs is no stranger to neuromorphic computing. A year ago, it added to its arsenal with the <a href=\"https:\/\/www.nextplatform.com\/2024\/04\/24\/sandia-pushes-the-neuromorphic-ai-envelope-with-hala-point-supercomputer\/\" target=\"_blank\" rel=\"noopener\">Hala Point system powered by Intel\u2019s Loihi 2<\/a> neuromorphic processor, using the system to test AI workloads and compare their performance with systems running on CPUs, GPUs, and other chips. The addition of SpiNNaker2 is part of the same ongoing initiative at Sandia to use such architectures to run energy-efficient AI applications that consume less power than traditional GPU-based systems, the company said.<\/p>\n<p>SpiNNcloud\u2019s Gonzalez outlined a number of applications for SpiNNaker2, including small multilayer perceptrons (MLPs) that are deployed at scale in every processor. 
The MLPs are small, so using a GPU would be overkill, he said, \u201cbut then you have many, many of them, and these MLPs are designed to find molecules. It\u2019s designed to have pattern matching between molecules in the drug discovery processes and also databases of profiles of patients. This is a strategy to do highly parallel drug discovery very efficiently.\u201d<\/p>\n<p>Others include QUBO-based optimization or logistics problems that can address different types of complex mathematical simulations and challenges that involve random worker algorithms, which use randomness to explore solutions to various problems.<\/p>\n<p>\u201cYou just simulate this worker so you deploy this worker at scale and then you can do complex mathematical simulation leveraging the large-scale characteristics of the system,\u201d Gonzalez said.<\/p>\n<p>A Call For Sparsity<\/p>\n<p>SpiNNcloud will continue to develop the architecture to support generative AI algorithms that run machine learning workloads through dynamic sparsity. Recent breakthroughs in machine learning are moving the industry from dense modeling to extreme dynamic sparsity, in which only a subset of neural pathways is activated based on the input to the system \u2013 an approach that the company said helps address the energy challenges in current AI computing.<\/p>\n<p>\u201cThere are very interesting directions where you get to find [and] granularly execute only parts of the network to retrieve the outputs,\u201d Gonzalez said. \u201cThis is what is known today as mixture of experts. People in this field have shown that the larger the number of experts you have, the more you can actually reduce the computational work. The sparsity is very large. You have a computational footprint that\u2019s very small. You get to reduce significantly the computational cost of these models. 
The problem is that standard hardware today is not designed for this fine granular isolation of paths, and this is where this type of very hybrid hardware that has characteristics from neuromorphic \u2013 so it has event-based communication \u2013 has a huge impact to offer into this mainstream AI domain. Essentially, you get to isolate paths, whereas the standard architectures like GPUs and all the GPU derivatives \u2013 Cerebras, SambaNova \u2013 they are optimized towards the regular cases. They work better when you have fully utilized blocks.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Some heavy hitters like Intel, IBM, and Google along with a growing number of smaller startups for 
the&hellip;\n","protected":false},"author":2,"featured_media":190436,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3164],"tags":[3284,20263,78021,7168,78022,53,16,15],"class_list":{"0":"post-190435","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-computing","9":"tag-neuromorphic","10":"tag-sandia","11":"tag-snl","12":"tag-spinnaker","13":"tag-technology","14":"tag-uk","15":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114696141289625572","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/190435","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=190435"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/190435\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/190436"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=190435"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=190435"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=190435"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}