At the heart of what we call artificial intelligence are artificial neural networks. Their very name implies a biologically inspired innovation, designed to mimic the function of neural networks within the brain. But unlike our three-dimensional brains, these networks have so far been built along just two dimensions, stacking layers of artificial neurons that are wide and deep.
The groundbreaking nature of this brain-inspired mimicry can’t be overstated. Two of the leading minds behind this breakthrough—John J. Hopfield and Geoffrey E. Hinton—even bagged the Nobel Prize in Physics last year. While the AI models inspired by their work have reimagined technology, these brain-like systems, especially in the age of AI, could also unlock longtime mysteries of our own brains.
But now, artificial intelligence systems could truly become more like human brains, and some researchers believe they know how: AI design must be elevated to another dimension—literally. While current models already have the parameters of width and depth, AI can be refashioned with an additional, structured complexity researchers are calling the “height” dimension. Two scientists, at Rensselaer Polytechnic Institute in New York and the City University of Hong Kong, explain this brain-inspired idea in a new study published in the journal Patterns this past April.
AI models already have width, the number of nodes in a layer. Each node is a fundamental processing unit that takes inputs and creates outputs within the larger network, like the neurons in your brain working together. There’s also depth, the number of layers in a network. Here’s how Rensselaer Polytechnic Institute’s Ge Wang, Ph.D., a co-author of the study, explains it: “Imagine a city: width is the number of buildings on a street, depth is how many streets you go through, and height is how tall each building is. Any room in any building can communicate with other rooms in the city,” he says in an email. When you add height, as internal wiring or shortcuts within a layer, it creates “richer interactions among neurons without increasing depth or width. It reflects how local neural circuits in the brain … interact to improve information processing.”
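To make the city analogy concrete, here’s a minimal sketch, in Python, of the two dimensions today’s models already have: width as the number of units in a layer, and depth as the number of layers a signal passes through. All of the names and numbers here are invented for illustration; nothing below is drawn from the paper itself.

```python
# A toy feedforward network: width = units per layer ("buildings on a
# street"), depth = number of layers ("streets you go through").
import numpy as np

rng = np.random.default_rng(0)

WIDTH, DEPTH = 8, 3  # 8 buildings per street, 3 streets
layers = [rng.normal(size=(WIDTH, WIDTH)) * 0.3 for _ in range(DEPTH)]

def forward(x):
    # Information flows strictly forward, one layer (street) at a time;
    # each layer mixes signals across its full width.
    for W in layers:
        x = np.tanh(W @ x)
    return x

print(forward(rng.normal(size=WIDTH)))
```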
Wang and his co-author Feng-Lei Fan, Ph.D., propose achieving this increased dimensionality within neural networks by introducing intra-layer links and feedback loops. Intra-layer links, Wang says, resemble the lateral connections found in the cortical columns of the brain, which are linked with higher-level cognitive functions. These links form connections among neurons in the same layer. Feedback loops, meanwhile, are similar to recurrent signaling, in which a network’s outputs feed back to influence its inputs. Wang says this could improve a system’s memory, perception, and cognition.
“Together, they help networks evolve over time and settle into stable, meaningful patterns, like how your brain can recognize a face even from a blurry image,” Wang says. “These structures enrich AI’s ability to refine decisions over time, just like the brain’s iterative reasoning.”
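Here’s how that third dimension might look in the same toy setting: a hypothetical lateral-link matrix wires neurons in one layer to each other, and a feedback loop lets the layer’s output re-enter as input until the activity settles into a stable pattern. This is only a sketch of the general idea, not the authors’ actual architecture.

```python
# A toy "tall" layer: ordinary feedforward weights W, plus lateral
# (intra-layer) links L and a feedback pass that iterates until the
# layer settles. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
WIDTH = 8

W = rng.normal(size=(WIDTH, WIDTH)) * 0.3  # feedforward weights
L = rng.normal(size=(WIDTH, WIDTH)) * 0.1  # lateral links within the layer
np.fill_diagonal(L, 0.0)                   # no neuron links to itself

def tall_layer(x, steps=20):
    h = np.tanh(W @ x)
    for _ in range(steps):               # feedback loop: output -> input
        h_new = np.tanh(W @ x + L @ h)   # neighbors in the same layer chime in
        if np.max(np.abs(h_new - h)) < 1e-6:
            break                        # settled into a stable pattern
        h = h_new
    return h

print(tall_layer(rng.normal(size=WIDTH)))
```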
AI’s most compelling leap into humanlike behavior may be the birth of large language models like ChatGPT. On June 12, 2017, eight researchers from Google released the paper “Attention Is All You Need,” introducing a new deep-learning architecture known as the transformer, which used attention mechanisms to drastically decrease training times for AI models. This paper largely fueled the subsequent AI boom—and is why AI suddenly seemed to be everywhere all at once.
However, it didn’t deliver the “holy grail” of most AI research: artificial general intelligence (AGI). This is the point at which AI systems can truly “think” like a human being. Some futurists consider AGI to be a key pathway to reaching the hypothetical singularity, the moment when AI surpasses human cognitive powers.
The reason transformer architecture didn’t fulfill AGI requirements, according to study co-author Wang, is that it has some inherent limitations. Reuters reported in 2024 that AI companies were no longer seeing the previously observed exponential growth in AI capability. AI’s “scaling law”—the idea that larger data sets and more resources would make models better and better—was no longer holding. This prompted some experts to believe that new innovations were needed to push these models even further. The new study suggests that integrating intra-layer links and feedback loops could enable models to relate, reflect, and refine outputs, making them both smarter and more energy efficient.
“One misconception is that our perspective merely proposes ‘more complexity,’” Wang says. “Our goal is structured complexity, adding dimensions and dynamics that reflect how intelligence arises in nature and works logically, not just stacking more layers or parameters.”
One fascinating example of how the authors propose adding this complexity involves feedback loops, which can undergo “phase transitions” that foster intelligent behaviors. The core idea is that these transitions can create new behaviors that were not originally present in the system’s individual components. Just as ice melts into water, so too could AI systems abruptly “evolve” into stable states.
“For AI, this could mean a system shifting from vague or uncertain outputs to confident, coherent ones as it gathers more context or feedback,” Wang says. “These transitions can mark a point where the AI truly ‘understands’ a task or pattern, much like human intuition kicking in.”
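A toy version of such a phase transition shows up in the simplest possible feedback loop: a single unit repeatedly fed its own output. The gain parameter below is invented for illustration, and the paper’s own dynamics are far richer, but the flavor is the same: below a critical value the loop decays to a washed-out zero (an “uncertain” output), while just above it the output snaps to a confident, stable value.

```python
# A toy phase transition in a feedback loop: h <- tanh(gain * h).
# Below gain = 1 the only stable state is 0 (uncertain); above it,
# the loop settles into a confident nonzero state.
import numpy as np

def settle(gain, h=0.01, steps=200):
    for _ in range(steps):
        h = np.tanh(gain * h)  # feedback: output becomes the next input
    return h

for gain in (0.8, 0.95, 1.05, 1.5, 3.0):
    print(f"gain={gain:4.2f} -> stable state {settle(gain):+.3f}")
```

Nothing in the update rule changes as the gain crosses 1, yet the system’s stable behavior changes sharply; that abrupt shift is the “phase transition.”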
Feedback loops and intra-layer links that mimic how the human mind arrives at insight and meaning aren’t enough to instantly herald the era of AGI. But, as Wang notes in his study, these techniques could help take AI models beyond transformer architecture, a crucial step if we ever hope to reach that “holy grail” of AI research.
Introducing more brain-like features into AI architecture could not only make AI more transparent (i.e., we know how it arrived at certain conclusions), but it could also help us investigate the mysterious and daunting puzzle of how our own human minds work. The authors note that this application could provide a model for scientists to explore human cognition while also investigating neurological disorders, such as Alzheimer’s and epilepsy.
Although taking inspiration from human biology provides benefits, Wang sees the future of AI as one where neuromorphic architectures—those modeled on the human brain—coexist with other systems, potentially even quantum ones, which harness the behavior of subatomic particles that we are still struggling to understand fully.
“Brain-inspired AI offers elegant solutions to complex problems, especially in perception and adaptability. Meanwhile, AI will also explore strategies unique to digital, analog or even quantum systems,” Wang says. “The sweet spot likely lies in hybrid designs, borrowing from nature and our imagination beyond nature.”
Darren lives in Portland, has a cat, and writes/edits about sci-fi and how our world works. You can find his previous stuff at Gizmodo and Paste if you look hard enough.