As generative AI adoption surges, the biggest bottleneck isn’t the model; it’s the chip. Today’s processors waste nearly all the energy they consume, converting electrical power into heat rather than useful computation.

Generative AI chatbots like ChatGPT consume roughly ten times more electricity per query than traditional Google searches, according to a Goldman Sachs report. By 2030, data centers could account for up to 12 percent of total US electricity usage, placing immense pressure on companies to optimize computational efficiency.

The root of this inefficiency isn’t algorithmic complexity but a fundamental limitation of transistor technology dating back to the dawn of CMOS chips: nearly all of the energy spent on digital computation ends up as waste heat.

What if physics could point us toward a way out?

Vaire Computing, a US and UK-based startup, says it has finally cracked this decades-old engineering challenge. Its reversible computing chips aim to recover up to 50 percent of the wasted energy, potentially reshaping the semiconductor industry’s approach to energy efficiency.

The timing is critical. Moore’s Law is approaching its physical limits, and the semiconductor industry is confronting the reality that its traditional routes to energy efficiency are running out of headroom.

Energy-efficient chips are crucial as Moore’s Law winds down. Credit: Getty Images

But can a small startup solve a problem that has eluded decades of semiconductor engineers? If reversible computing truly holds the answer, why hasn’t the industry embraced it before?

The energy crisis in CMOS

All modern digital devices rely on CMOS (complementary metal-oxide-semiconductor) chips, built from billions of tiny transistors arranged into logic gates. Switching these transistors between 0 and 1 states requires energy, delivered as an applied voltage.

In conventional CMOS architectures, transistors switch abruptly. Excess electrical energy supplied during the switching process has nowhere to go except to dissipate as heat.

“This results in a sudden and lossy current surge, which essentially dissipates the entire bit energy as heat,” Michael Frank, Senior Scientist at Vaire Computing, told Interesting Engineering (IE).

Scaled across billions of transistors operating at gigahertz speeds, this inefficiency means most power input into processors emerges as waste heat, forcing expensive cooling solutions and imposing severe constraints on data center scalability.
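To put that in perspective, here is a rough back-of-the-envelope sketch in Python. The capacitance, voltage, clock rate, transistor count, and activity factor are illustrative assumptions, not figures from Vaire or any particular processor; the point is simply that roughly half of C·V² lost per transition, multiplied across billions of gates at gigahertz rates, adds up to tens of watts of pure heat.

```python
# Back-of-the-envelope estimate of conventional CMOS switching power.
# All values below are illustrative assumptions, not measured figures.

C_GATE = 1e-15        # ~1 femtofarad of switched capacitance per gate (assumed)
V_DD = 0.8            # supply voltage in volts (assumed)
FREQ = 3e9            # 3 GHz clock (assumed)
N_GATES = 1e9         # one billion switching gates (assumed)
ACTIVITY = 0.1        # fraction of gates toggling each cycle (assumed)

# Energy dissipated per abrupt transition: roughly 1/2 * C * V^2
energy_per_switch = 0.5 * C_GATE * V_DD**2

# Dynamic power = energy per switch * switches per second
power_watts = energy_per_switch * N_GATES * ACTIVITY * FREQ

print(f"Energy lost per switch: {energy_per_switch:.2e} J")
print(f"Estimated dynamic power turned into heat: {power_watts:.1f} W")
```

With these assumed numbers, the switching losses alone come to roughly 100 W, which is in the ballpark of a desktop CPU’s entire thermal budget.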

Vaire’s alternative: Adiabatic switching

Vaire’s reversible chips employ adiabatic switching, a gradual and controlled transfer of charge, to capture and reuse energy that is traditionally lost during transistor switching.

“In Vaire’s adiabatic reversible CMOS, we transfer charges in a steady, gradual manner, dissipating only a small fraction of the bit energy with each transition while retaining the rest of the energy in an organized electrical form that can be reused,” Frank explained.

Unlike conventional processors, Vaire’s chips avoid deleting information, sidestepping the heat cost imposed by Landauer’s principle, which states that erasing a bit inevitably dissipates a minimum amount of energy (about kT ln 2, or roughly three zeptojoules at room temperature). Instead, information is preserved and recycled through reversible operations.

In thermodynamics, an “adiabatic” process is one that exchanges no heat with its environment. Applied to computing, the term is used a little loosely: transitions are slowed and controlled so that very little energy is lost as heat, leaving more to be recycled for future operations.
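The textbook comparison behind Frank’s description can be sketched in a few lines. Charging a load capacitance C to voltage V through a switch abruptly wastes about half of C·V² no matter how small the resistance, while ramping the supply over a time T much longer than the RC time constant wastes only on the order of (RC/T)·C·V². The values below are illustrative assumptions, not Vaire’s circuit parameters.

```python
# Compare abrupt vs. adiabatic (gradual) charging losses.
# Illustrative assumptions, not Vaire's actual circuit parameters.

C = 1e-15      # load capacitance in farads (assumed)
V = 0.8        # voltage swing in volts (assumed)
R = 1e3        # on-resistance of the switch in ohms (assumed)
T = 1e-9       # ramp time of the resonant supply, 1 ns (assumed)

# Abrupt switching: about half the supplied energy is lost in the switch,
# independent of how small R is.
e_abrupt = 0.5 * C * V**2

# Adiabatic switching: loss scales as RC/T, so slower ramps waste less.
e_adiabatic = (R * C / T) * C * V**2

print(f"Abrupt loss per transition:    {e_abrupt:.2e} J")
print(f"Adiabatic loss per transition: {e_adiabatic:.2e} J")
print(f"Reduction factor: {e_abrupt / e_adiabatic:.0f}x")
```

The exact constant in front of RC/T depends on the shape of the ramp, but the scaling is the key point: the slower and more gradual the charge transfer, the smaller the fraction of the bit energy that is lost.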

How they’re doing it

Vaire’s current prototypes recover roughly 50 percent of computational energy, but according to Frank, this is just an initial milestone. The company expects significantly higher efficiency in upcoming designs.

The energy recovery process itself remains largely proprietary. However, Frank provided a simplified example to illustrate how the process works.

“The voltage in a logic gate or circuit resonates between 0 and 1, gated by a switch; then the voltage controlling that switch swings resonantly from 1 to 0, gated by another ‘reverse’ switch.”

“The net effect is that a 1 bit has been moved forward one stage in a datapath pipeline, with low energy dissipation,” he explained.

Think of it like a pendulum swinging back and forth. 

When one part of the circuit uses energy to switch from 0 to 1, that energy gets captured in the oscillating system and then swings back to power the next operation. The energy doesn’t disappear as heat—it gets recycled through the circuit.
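A minimal way to see the pendulum analogy in numbers is to simulate a series RLC loop: the inductor and the load capacitance trade energy back and forth, and only the small series resistance turns energy into heat. The component values below are made-up assumptions, chosen so the tank rings with a reasonably high quality factor; they are not Vaire’s design.

```python
import math

# Minimal simulation of energy "sloshing" in a lossy LC tank circuit.
# Component values are illustrative assumptions, not Vaire's design.
L_IND = 1e-9    # tank inductance in henries (assumed)
C_CAP = 1e-12   # aggregate load capacitance in farads (assumed)
R_SER = 0.5     # parasitic series resistance in ohms (assumed)
DT = 1e-13      # simulation time step in seconds

v = 0.8         # initial capacitor voltage, the stored "bit energy" (assumed)
i = 0.0         # initial inductor current

def energy(v, i):
    """Total energy stored in the capacitor and the inductor."""
    return 0.5 * C_CAP * v**2 + 0.5 * L_IND * i**2

e_start = energy(v, i)
period = 2 * math.pi * math.sqrt(L_IND * C_CAP)

# Semi-implicit Euler: update the current first, then the voltage,
# which keeps the oscillation numerically stable.
for _ in range(int(period / DT)):
    i += (v - i * R_SER) / L_IND * DT   # KVL around the series loop
    v -= i / C_CAP * DT                 # capacitor discharges into the inductor

print(f"Energy still in the tank after one full swing: "
      f"{100 * energy(v, i) / e_start:.1f}%")
```

With these made-up values, roughly 90 percent of the energy remains in the tank after one oscillation; the rest is lost in the series resistance, which is the loss mechanism real designs fight to minimize.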

However, scaling this elegant concept beyond small circuits to millions of synchronized logic gates introduces considerable engineering complexity.

View of a transistor with gold circuits and wire bonds on ceramic substrate. Siemens’ TF78/60 PNP germanium transistor. Credit: Mister rf/Wikimedia Commons.

Vaire addresses this by structuring chips into “resonant domains,” each comprising thousands or even millions of logic gates synchronized into phased groups.

“Each domain might include thousands or even millions of reversible logic gates, with the exact number depending on the technology and the frequency spec. The logic driven by one phase of the resonant oscillator has an aggregate capacitance, which forms a part of the resonant tank circuit,” Frank explained.

In other words, all the logic gates in a domain work together as a single electrical storage unit that becomes part of the energy recycling system.
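As a toy illustration of that idea, the aggregate capacitance of a domain and the inductance of its resonant supply set the operating frequency in the usual LC fashion, f = 1/(2π√(LC)). The gate count and per-gate capacitance below are assumptions made purely for the arithmetic, not figures from Vaire.

```python
import math

# Toy sizing of a "resonant domain": illustrative assumptions only.
N_GATES = 1_000_000        # reversible logic gates in one domain (assumed)
C_PER_GATE = 1e-15         # switched capacitance per gate, ~1 fF (assumed)
TARGET_FREQ = 1e9          # desired resonant frequency, 1 GHz (assumed)

# The domain's gates together look like one big capacitor to the oscillator.
c_total = N_GATES * C_PER_GATE

# Choose the tank inductance so that f = 1 / (2 * pi * sqrt(L * C)).
l_required = 1.0 / ((2 * math.pi * TARGET_FREQ) ** 2 * c_total)

print(f"Aggregate domain capacitance: {c_total * 1e9:.1f} nF")
print(f"Tank inductance for 1 GHz:    {l_required * 1e12:.1f} pH")
```

The specific numbers matter less than the structure: the logic itself acts as the capacitor in the oscillator, so its stored charge is part of the resonant system rather than something to be dumped to ground.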

This approach requires precise timing coordination across large sections of the chip. 

Unlike conventional processors that use a single clock signal, Vaire’s chips need multiple synchronized phases to capture and reuse energy at exactly the right moments. This timing coordination challenge becomes more complex as chip sizes grow.
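One way to picture the multi-phase idea is as several power-clock waveforms, each shifted in time so that one phase is ramping up while another ramps down and energy leaving one group of gates helps charge the next. The snippet below simply generates four evenly spaced sinusoidal phases, a common arrangement in published adiabatic-logic designs, as a stand-in for whatever waveforms Vaire actually uses, which the company has not disclosed.

```python
import math

# Generate four evenly spaced power-clock phases for illustration.
# Sinusoidal phases are an assumption; Vaire's waveforms are not public.
N_PHASES = 4
FREQ = 1e9          # 1 GHz resonant clock (assumed)
SAMPLES = 8         # sample points across one clock period

for step in range(SAMPLES):
    t = step / (SAMPLES * FREQ)          # time within one clock period
    row = []
    for p in range(N_PHASES):
        phase_shift = 2 * math.pi * p / N_PHASES
        # Each phase swings between 0 and 1 V, offset by a quarter period.
        v = 0.5 * (1 + math.sin(2 * math.pi * FREQ * t - phase_shift))
        row.append(f"{v:.2f}")
    print(f"t = {t * 1e9:.3f} ns  phase voltages: {' '.join(row)}")
```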

A reality check

Reversible computing isn’t new. Early research in the 1990s at MIT, involving Frank himself, established proof-of-principle reversible chips. Yet commercialization never materialized, primarily due to economic realities.

The barriers to reversible computing extend well beyond the physics. According to Steven Brightfield, Chief Marketing Officer at BrainChip, the reason it never took off lies in historical economics.

“The ability to scale quickly and easily to each new process every year following Moore’s law provided consistent power savings of almost 40 percent and density increases that could be relied on,” he told IE.

Why invest heavily in reversible computing when Moore’s Law reliably offered simpler paths forward?

However, as Moore’s Law hits physical and economic constraints, revisiting radical ideas like reversible computing becomes viable—and potentially necessary.

The commercial roadmap

Today, that dynamic is changing as traditional scaling becomes economically infeasible for all but the largest companies. But Vaire still faces the fundamental challenge of scaling up to large, complex chips.

While Vaire’s resonant domains might work for smaller circuits, coordinating precise timing across millions of gates is a far harder engineering problem.

Brightfield emphasizes that the semiconductor industry needs proof the technology works at scale.

“A single stand-alone chip, maybe as proposed by Vaire, would allow the industry to integrate it into a system and then evaluate if it can be integrated as IP into a larger SoC [system on a chip] since the issue that is still not clear even then is if it can scale into very large chips that would make it viable technology for a major player like Nvidia,” he explained.

Despite the technical challenges, Vaire is pushing ahead with ambitious commercial plans, anticipating market-ready reversible chips by 2028, contingent on continued funding and scaling success.

“Our current roadmap includes a demonstration of our existing prototype this year, work towards completion of a more advanced prototype in 2026, and another product-oriented demonstration chip in 2027. If all goes well, it’s possible that our chips could be included in major products by 2028,” Frank explained.

Vaire is focusing on applications where energy efficiency provides the biggest competitive advantage. Frank identified “any parallelizable application limited by an energy budget, or by constraints on power delivery and cooling” as prime targets—including much of today’s computing landscape.

Because adiabatic switching favors slower, gentler transitions, Vaire’s approach trades some per-operation speed for efficiency. Using massive parallelization to compensate aligns well with AI workloads, which naturally benefit from parallel processing.

“Today’s high-performance chips are almost entirely limited by issues with hotspots and/or package-level thermal loads rather than by raw transistor count,” Frank noted.

“So if you can improve the raw power efficiency of the tech, you can also increase practical throughput significantly within typical product form factors and power constraints.”
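The arithmetic behind that claim is straightforward and can be sketched with hypothetical numbers: if throughput is capped by a fixed thermal budget rather than by transistor count, cutting the energy per operation in half roughly doubles the operations per second you can sustain, overheads aside. The factor of two below loosely mirrors the 50 percent recovery figure mentioned earlier, but both it and the other values are used here only as hypothetical inputs.

```python
# Power-limited throughput: hypothetical numbers, not vendor figures.
POWER_BUDGET_W = 300.0               # fixed package thermal limit (assumed)

energy_per_op_conventional = 1e-11   # joules per operation, conventional (assumed)
energy_per_op_reversible = 0.5e-11   # half the energy per operation (assumed)

# Sustained throughput is whatever fits inside the thermal budget.
ops_conventional = POWER_BUDGET_W / energy_per_op_conventional
ops_reversible = POWER_BUDGET_W / energy_per_op_reversible

print(f"Conventional: {ops_conventional:.2e} ops/s within {POWER_BUDGET_W:.0f} W")
print(f"Reversible:   {ops_reversible:.2e} ops/s within {POWER_BUDGET_W:.0f} W")
print(f"Throughput gain at the same power: {ops_reversible / ops_conventional:.1f}x")
```

In practice the gain would be smaller, since resonant clocking, slower transitions, and control overhead all eat into the savings, but the direction of the trade-off is what Frank is pointing at.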

If Vaire succeeds, reversible computing might redefine data center economics, unlock powerful edge AI devices, and extend battery life across billions of devices.

Yet the semiconductor industry’s history is filled with promising technologies that never scaled beyond the lab. Vaire’s challenge is not merely to demonstrate reversible computing in theory but to show it can scale reliably and economically.

With AI driving unprecedented computing demand, the semiconductor industry might finally be ready to rethink foundational chip architectures.

Whether Vaire can deliver on reversible computing’s promise remains to be seen—but engineers, chip designers, and businesses facing energy constraints will be watching closely.