You are reading this on a device that needs charging, updates, and a steady stream of electricity from the wall. Meanwhile, your brain is handling vision, memory, and decision-making on a power budget researchers often put at roughly 20 watts.

A March 15, 2023, analysis argues that this efficiency gap is not just a fun comparison. It is a practical clue for the future of artificial intelligence, especially as the energy footprint of computing becomes harder to ignore. What if the next leap comes from using less power, not more?

The 20-watt benchmark

In that analysis, research scientist Advait Madhavan wrote that the brain can deliver the equivalent of an exaflop, meaning about a quintillion math-like operations each second, while using only 20 watts.

He noted that 20 watts is about the power draw of a typical computer monitor, then contrasted it with the Frontier supercomputer at Oak Ridge National Laboratory, which reaches exascale performance but needs about 20 megawatts, roughly a million times more power.
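A rough back-of-the-envelope check makes the gap concrete, using only the figures quoted above. The brain’s “exaflop equivalent” is a loose analogy rather than a measurement, so treat the result as an order-of-magnitude illustration.

```python
# Back-of-the-envelope comparison using the figures quoted above.
# The brain's "exaflop equivalent" is a loose analogy, not a measurement.

OPS_PER_SECOND = 1e18        # one exaflop: about a quintillion operations per second

brain_watts = 20             # rough power budget of the human brain
frontier_watts = 20e6        # roughly 20 megawatts for Frontier at exascale

brain_ops_per_joule = OPS_PER_SECOND / brain_watts        # ~5e16
frontier_ops_per_joule = OPS_PER_SECOND / frontier_watts  # ~5e10

ratio = brain_ops_per_joule / frontier_ops_per_joule
print(f"Efficiency gap: about {ratio:,.0f}x")             # about 1,000,000x
```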

The U.S. Department of Energy has said its exascale systems were designed to operate within about 20 megawatts per exaflop, which shows how tightly performance is tied to power. That kind of constraint is why researchers pay attention when biology does so much with so little.

Why the brain stays efficient

A U.S. National Science Foundation feature explains another piece of the puzzle. Even though the brain is only about 2 percent of body weight, it accounts for about 20 percent of the body’s energy use in adults, and most tasks add only a small extra cost on top of that steady baseline.
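Those percentages also line up with the 20-watt figure. The quick sketch below assumes a typical adult resting metabolism of about 2,000 kilocalories per day, an illustrative assumption rather than a number from the NSF feature.

```python
# How the 2 percent / 20 percent figures line up with the 20-watt estimate,
# assuming a typical adult resting metabolism of about 2,000 kilocalories per day
# (an illustrative assumption, not a number from the NSF feature).

KCAL_PER_DAY = 2000
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 24 * 60 * 60

body_watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY  # ~97 W at rest
brain_watts = 0.20 * body_watts                                # ~19 W

print(f"Whole body: about {body_watts:.0f} W; the brain's 20 percent share: about {brain_watts:.0f} W")
```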

BrainFacts, a public education site run by the Society for Neuroscience, points to a key strategy: most neurons stay relatively quiet for long stretches, and smaller groups ramp up only when needed, like a city that turns on just the lights it needs. Less waste, same results.

AI is pushing electricity demand

This matters because AI does not live in the cloud in any magical sense. It runs in data centers filled with chips, cooling systems, and backup power, and at the end of the day someone has to pay the electric bill. It adds up.

The International Energy Agency estimates that data centers used around 415 terawatt-hours of electricity in 2024, about 1.5 percent of global electricity demand. (A terawatt-hour is a billion kilowatt-hours.) The agency projects this could rise to about 945 terawatt-hours by 2030 in its base case, while warning that grid upgrades often move slower than new hardware.
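A quick calculation shows what that projection implies if growth were spread evenly over the six years, a simplifying assumption for illustration rather than how the agency actually models it.

```python
# What the IEA base case implies if data center demand grew at a steady rate,
# a simplifying assumption for illustration rather than the agency's actual model.

twh_2024 = 415   # estimated data center electricity use in 2024, in terawatt-hours
twh_2030 = 945   # projected base-case use in 2030
years = 2030 - 2024

implied_annual_growth = (twh_2030 / twh_2024) ** (1 / years) - 1
print(f"Implied compound annual growth: {implied_annual_growth:.1%}")  # roughly 15% per year
```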

A surge is already visible

In an update published April 16, 2026, the agency said electricity demand from data centers jumped 17 percent in 2025, and AI-focused facilities grew even faster. The same update says power per AI task is falling rapidly, but more usage and new features can still push total demand up, creating a “scramble for solutions” across grids and supply chains.

That tension helps explain why brain efficiency keeps showing up in AI discussions. Faster models are exciting, but the unglamorous question is whether the infrastructure can keep up without making energy and hardware costs explode.

Computing based on timing, not just switching

One hardware idea is to treat time itself as part of the code. In a short explainer from NIST, the lab describes “race logic,” where signals race through a circuit and the winner’s timing carries the information, instead of forcing every operation to march in lockstep.

Another related research thread is temporal computing, which looks for ways to generate and use time-based signals so circuits can solve certain problems more efficiently. In plain language, it is an attempt to get useful answers by letting the fastest path win, rather than burning power on every possible path.
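A toy software simulation can make the idea concrete. In the sketch below, a signal fans out across a small made-up graph, every connection adds a delay, and the first arrival at the destination reveals the shortest travel time. The graph, delays, and function name are invented for illustration; real race logic lives in hardware, not Python.

```python
# Toy simulation of the race-logic idea: information is carried by *when* a signal
# arrives, not by a binary word. Each connection in a small made-up graph adds delay,
# and the first wavefront to reach the destination reveals the shortest travel time.
# This is an illustrative software sketch, not the circuit described in the explainer.
import heapq

# Hypothetical graph: node -> list of (neighbor, delay) pairs.
delays = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}

def first_arrival(graph, start, goal):
    """Launch a signal at `start` and return the time the first copy reaches `goal`."""
    events = [(0, start)]            # (arrival time, node), earliest first
    settled = set()
    while events:
        time, node = heapq.heappop(events)
        if node in settled:
            continue                 # a faster signal already won the race to this node
        settled.add(node)
        if node == goal:
            return time              # the winner's timing *is* the answer
        for neighbor, delay in graph[node]:
            heapq.heappush(events, (time + delay, neighbor))
    return None

print(first_arrival(delays, "A", "D"))   # 4, via the path A -> B -> C -> D
```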

Cutting connections on purpose

Brain-inspired efficiency can also show up at the software and model design level. A Topographical Sparse Mapping study in the journal Neurocomputing describes a way to wire an artificial neural network so each unit connects only to nearby or related units, instead of connecting almost everything to everything.

A University of Surrey press release about the same work says an enhanced version reached up to 99 percent sparsity, meaning roughly 99 of every 100 potential connections could be removed while the network still matched or exceeded accuracy on standard benchmarks.
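The general flavor of that kind of wiring is easy to sketch. The code below illustrates local, topographic connectivity in general, not the specific method from the Neurocomputing paper; the layer sizes and neighborhood width are made up for the example.

```python
# Sketch of local, topographic connectivity in general: each output unit connects
# only to a small neighborhood of inputs, and everything else is masked off.
# Layer sizes and neighborhood width are illustrative, not taken from the paper.
import numpy as np

n_inputs, n_outputs, radius = 1000, 1000, 5

mask = np.zeros((n_outputs, n_inputs), dtype=bool)
for i in range(n_outputs):
    center = int(i * n_inputs / n_outputs)          # map output i to a position among the inputs
    mask[i, max(0, center - radius):center + radius + 1] = True

sparsity = 1.0 - mask.mean()
print(f"Connections kept: {mask.sum():,} of {mask.size:,} (sparsity {sparsity:.1%})")

# During training, a dense weight matrix would simply be multiplied by this mask so
# the removed connections never carry signal or gradients.
weights = np.random.randn(n_outputs, n_inputs) * mask
```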

It also quotes the team warning that training popular large models can consume over a million kilowatt-hours of electricity, the same unit you see on many home power bills, and “that simply isn’t sustainable” if AI keeps scaling the old way.

What comes next for brain-inspired AI

A September 3, 2021, article from Stanford University highlights why these ideas can be hard to translate. Liqun Luo uses a language metaphor to argue that AI researchers often skip over the “words and sentences” of brain circuitry and instead lean on brute force and lots of computation, which works but can be expensive.

No single trick will turn today’s AI into a 20-watt system. But as electricity demand becomes a real-world constraint, more of the progress may come from smarter wiring, specialized circuits, and a willingness to do less work when less work is enough.

The main analysis cited here was published by the National Institute of Standards and Technology on its Taking Measure blog.