{"id":593171,"date":"2025-11-25T16:40:24","date_gmt":"2025-11-25T16:40:24","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/593171\/"},"modified":"2025-11-25T16:40:24","modified_gmt":"2025-11-25T16:40:24","slug":"this-ai-combo-could-unlock-human-level-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/593171\/","title":{"rendered":"This AI combo could unlock human-level intelligence"},"content":{"rendered":"\n<p>Will computers ever match or surpass <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-03905-1\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-024-03905-1\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">human-level intelligence<\/a> \u2014 and, if so, how? When the Association for the Advancement of Artificial Intelligence (AAAI), based in Washington DC, asked its members earlier this year whether <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-01314-y\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-024-01314-y\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">neural networks<\/a> \u2014 the current star of artificial-intelligence systems \u2014 alone will be enough to hit this goal, the <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-00649-4\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-00649-4\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">vast majority said no<\/a>. 
Instead, most said, a heavy dose of an older kind of AI will be needed to get these systems up to par: symbolic AI.<\/p>\n<p>Sometimes called \u2018good old-fashioned AI\u2019, symbolic AI is based on formal rules and an encoding of the logical relationships between concepts<a href=\"#ref-CR1\" data-track=\"click\" data-action=\"anchor-link\" data-track-label=\"go to reference\" data-track-category=\"references\">1<\/a>. Mathematics is symbolic, for example, as are \u2018if\u2013then\u2019 statements and computer coding languages such as Python, along with flow charts or Venn diagrams that map how, say, cats, mammals and animals are conceptually related. Decades ago, symbolic systems were an early front-runner in the AI effort. However, in the early 2010s, they were vastly outpaced by more-flexible neural networks. 
These machine-learning models <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-02756-0\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-024-02756-0\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">excel at learning<\/a> from vast amounts of data, and underlie <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-03386-w\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-03386-w\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">large language models (LLMs)<\/a>, as well as chatbots such as ChatGPT.<\/p>\n<p>Now, however, the computer-science community is pushing hard for a better and bolder melding of the old and the new. \u2018Neurosymbolic AI\u2019 has become the hottest buzzword in town. Brandon Colelough, a computer scientist at the University of Maryland in College Park, has charted the meteoric rise of the concept in academic papers (see \u2018Going up and up\u2019). These reveal a spike of interest in neurosymbolic AI that started in around 2021 and shows no sign of slowing down<a href=\"#ref-CR2\" data-track=\"click\" data-action=\"anchor-link\" data-track-label=\"go to reference\" data-track-category=\"references\">2<\/a>.<\/p>\n<p>Plenty of researchers are heralding the trend as an escape from what they see as an unhealthy monopoly of neural networks in AI research, and expect the shift to deliver smarter and more reliable AI.<\/p>\n<p>A better melding of these two strategies could lead to <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-03905-1\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-024-03905-1\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">artificial general intelligence (AGI)<\/a>: AI that can reason and generalize its knowledge from one situation to another as well as humans do. 
It might also be useful for high-risk applications, such as <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-02260-z\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-02260-z\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">military<\/a> or <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-02362-8\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-02362-8\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">medical<\/a> decision-making, says Colelough. Because symbolic AI is transparent and understandable to humans, he says, it doesn\u2019t suffer from the <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-02366-2\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-023-02366-2\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">\u2018black box\u2019 syndrome<\/a> that can make neural networks hard to trust.<\/p>\n<p><img decoding=\"async\" class=\"figure__image\" alt=\"GOING UP AND UP: bar chart showing the increase in published papers on neurosymbolic AI between 1995 and 2024. The past five years show an increase from fewer than 10 to more than 200 per year.\" loading=\"lazy\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/11\/d41586-025-03856-1_51751370.jpg\"\/><\/p>\n<p class=\"figure__caption u-sans-serif\">Source: updated from ref. 
2<\/p>\n<p>There are already good examples of neurosymbolic AI, including Google DeepMind\u2019s AlphaGeometry, a system reported last year<a href=\"#ref-CR3\" data-track=\"click\" data-action=\"anchor-link\" data-track-label=\"go to reference\" data-track-category=\"references\">3<\/a> that can <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-02343-x\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-02343-x\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">reliably solve maths Olympiad problems<\/a> \u2014 questions aimed at talented secondary-school students. But working out how best to combine neural networks and symbolic AI into an all-purpose system is a formidable challenge.<\/p>\n<p>\u201cYou\u2019re really architecting this kind of two-headed beast,\u201d says computer scientist William Regli, also at the University of Maryland.<\/p>\n<p>War of words<\/p>\n<p>In 2019, computer scientist Richard Sutton posted a short essay entitled \u2018The bitter lesson\u2019 on his blog (see <a href=\"http:\/\/www.incompleteideas.net\/IncIdeas\/BitterLesson.html\" data-track=\"click\" data-label=\"http:\/\/www.incompleteideas.net\/IncIdeas\/BitterLesson.html\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">go.nature.com\/4paxykf<\/a>). In it, he argued that, since the 1950s, people have repeatedly assumed that the best way to make intelligent computers is to feed them with all the insights that humans have arrived at about the rules of the world, in fields from physics to social behaviour. The bitter pill to swallow, wrote Sutton, is that time and time again, symbolic methods have been outdone by systems that use a ton of raw data and scaled-up computational power to leverage \u2018search and learning\u2019. 
Early chess-playing computers trained on human-devised strategies, for example, were outperformed by those that were simply fed lots of game data.<\/p>\n<p>This lesson has been widely quoted by proponents of neural networks to support the idea that making these systems <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00641-w\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-023-00641-w\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">ever-bigger is the best path to AGI<\/a>. But many researchers argue that the essay overstates its case and downplays the crucial part that symbolic systems can and do play in AI. For example, the best chess program today, Stockfish, pairs a neural network with a symbolic tree of allowable moves.<\/p>\n<p>Neural nets and symbolic algorithms both have pros and cons. Neural networks are made up of layers of nodes with weighted connections that are adjusted during training to recognize patterns and learn from data. 
They are fast and <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-03570-y\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-03570-y\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">creative<\/a>, but they are also bound to <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-00068-5\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-025-00068-5\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">make things up<\/a> and can\u2019t reliably answer questions beyond the scope of their training data.<\/p>\n<p>Symbolic systems, meanwhile, struggle to encompass \u2018messy\u2019 concepts, such as <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-03272-3\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-023-03272-3\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">human language<\/a>, which require vast rule databases that are difficult to build and slow to search. But their workings are clear, and they are good at reasoning, using logic to apply their general knowledge to fresh situations.<\/p>\n<p>When put to use in the real world, neural networks that lack symbolic knowledge make classic mistakes: image generators might draw people with six fingers on each hand because they haven\u2019t learnt the general concept that hands typically have five; video generators struggle to make a ball bounce around a scene because they haven\u2019t learnt that gravity pulls things downwards. Some researchers blame such mistakes on a lack of data or computing power, but others say that the mistakes illustrate neural networks\u2019 fundamental inability to generalize knowledge and reason logically.<\/p>\n<p>Many argue that adding symbolism to neural nets might be the best \u2014 even the only \u2014 way to inject logical reasoning into AI. 
The global technology firm IBM, for example, is backing neurosymbolic techniques as a path to AGI. But others remain sceptical: Yann LeCun, one of the <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-01626-z\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-024-01626-z\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">fathers of modern AI<\/a> and chief AI scientist at tech giant Meta, has said that neurosymbolic approaches are \u201cincompatible\u201d with neural-network learning.<\/p>\n<p>Sutton, who is at the University of Alberta in Edmonton, Canada, and won the 2024 Turing prize, the equivalent of the Nobel prize for computer science, holds firm to his original argument: \u201cThe bitter lesson still applies to today\u2019s AI,\u201d he told Nature. This suggests, he says, that \u201cadding a symbolic, more manually crafted element is probably a mistake\u201d.<\/p>\n<p>Gary Marcus, an AI entrepreneur, writer and cognitive scientist based in Vancouver, Canada, and one of the most vocal advocates of neurosymbolic AI, tends to frame this difference of opinion as a philosophical battle that is now being settled in his favour.<\/p>\n<p>Others, such as roboticist Leslie Kaelbling at the Massachusetts Institute of Technology (MIT) in Cambridge, say that arguments over which view is right are a distraction, and that people should just get on with whatever works. \u201cI\u2019m a magpie. 
I\u2019ll do anything that makes my robots better.\u201d<\/p>\n<p>Mix and match<\/p>\n<p>Beyond the fact that neurosymbolic AI aims to meld the benefits of neural nets with the benefits of symbolism, its definition is blurry. Neurosymbolic AI encompasses \u201ca very large universe\u201d, says Marcus, \u201cof which we\u2019ve explored only a tiny bit\u201d.<\/p>\n<p>There are many broad approaches, which people have attempted to categorize in various ways. One option highlighted by many is the use of symbolic techniques to improve neural nets<a href=\"#ref-CR4\" data-track=\"click\" data-action=\"anchor-link\" data-track-label=\"go to reference\" data-track-category=\"references\">4<\/a>. AlphaGeometry is arguably <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00141-5\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/d41586-024-00141-5\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">one of the most sophisticated examples<\/a> of this strategy: it trains a neural net on a synthetic data set of maths problems produced using a symbolic computer language, making the solutions easier to check and ensuring fewer mistakes. It combines the two elegantly, says Colelough. In another example, \u2018logic tensor networks\u2019 provide a way to encode symbolic logic for neural networks. Statements can be assigned a fuzzy-truth value<a href=\"#ref-CR5\" data-track=\"click\" data-action=\"anchor-link\" data-track-label=\"go to reference\" data-track-category=\"references\">5<\/a>: a number somewhere between 1 (true) and 0 (false). This provides a framework of rules to help the system reason.<\/p>\n<p>Another broad approach does what some would say is the reverse, using neural nets to finesse symbolic algorithms. 
One problem with symbolic knowledge databases is that they are often so large that they take a very long time to search: the \u2018tree\u2019 of all possible moves in a game of Go, for example, contains about 10<sup>170<\/sup> positions, which is unfeasibly large to crunch through. Neural networks can be trained to predict the most promising subset of moves, allowing the system to cut down how much of the \u2018tree\u2019 it has to search, and thus shortening the time it takes to settle on the best move. That\u2019s what Google\u2019s AlphaGo did when it <a href=\"https:\/\/www.nature.com\/articles\/531284a\" data-track=\"click\" data-label=\"https:\/\/www.nature.com\/articles\/531284a\" data-track-category=\"body text link\" target=\"_blank\" rel=\"noopener\">famously outperformed a Go grandmaster<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"Will computers ever match or surpass human-level intelligence \u2014 and, if so, how? When the Association for the&hellip;\n","protected":false},"author":2,"featured_media":593172,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[8668,2348,3965,3690,7462,3966,70,53,16,15],"class_list":{"0":"post-593171","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-science","8":"tag-computer-science","9":"tag-history","10":"tag-humanities-and-social-sciences","11":"tag-machine-learning","12":"tag-mathematics-and-computing","13":"tag-multidisciplinary","14":"tag-science","15":"tag-technology","16":"tag-uk","17":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115611332736055502","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/593171","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types
\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=593171"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/593171\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/593172"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=593171"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=593171"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=593171"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}