For the last few years, GPUs (graphics processing units) have been the hottest sellers around. Hyperscalers, neoclouds, and everyone in between have been spending billions to get their hands on the high-end chips to train and run their AI models.

That left the humble CPU (central processing unit), which powers virtually everything else in data centers, and thus the applications and services you use every day, out in the cold.

That’s starting to change, though. Earlier this month, Meta (META) and Nvidia (NVDA) announced an expanded deal that will see Nvidia provide the social media giant with the largest deployment of its Grace CPU-only servers to date.

Then just last week, AMD (AMD) announced its own deal with Meta, which includes servers running the company’s Venice and next-generation Verano CPUs.

And during Intel’s (INTC) Jan. 22 earnings call, CEO Lip-Bu Tan cited AI as a major driver for CPU demand.

“The continuing proliferation and diversification of AI workloads is placing significant capacity constraints on traditional and new hardware infrastructure, reinforcing the growing and essential role CPUs play in the AI era,” Tan said.

It might sound counterintuitive for CPUs to grab some of the spotlight amid the global AI buildout, but in a world where AI inference and agentic AI are becoming increasingly important, CPUs, it turns out, are primed to shine.

CPUs are the brains of virtually every computer on the planet. They’ve dominated hyperscalers’ data centers since Amazon (AMZN) launched Amazon Web Services (AWS) in 2006, helping to ensure you’re able to order an Uber, share a Google Doc, and browse your favorite websites.

But ever since OpenAI (OPAI.PVT) debuted ChatGPT in late 2022, companies have pushed aside CPUs in favor of GPUs. That’s been a boon for Nvidia, which had already developed GPUs designed for use in data centers and has since become the most valuable company in the world.

While GPU servers need CPUs to send data to and from the AI chips, CPU servers on their own fell out of fashion as companies poured more money into GPU servers.

Cabinets holding racks and active servers at the Digital Realty Innovation Lab (DRIL) data center in Ashburn, Virginia, on November 12, 2025. (Andrew Caballero-Reynolds/AFP via Getty Images)

“We definitely saw … the build-out and the budget and the investment pivot toward … GPU infrastructure to train the AI models, but also to deploy them at scale,” explained Ian Buck, Nvidia VP of hyperscale and high-performance computing.

“As a result, the CPU market kind of stagnated.”

In 2023, revenue in Intel’s Data Center and AI segment declined 5.2% year over year; it was largely flat in 2024 before recovering slightly, up 4.9%, in 2025. It’s worth noting that Intel was also struggling through its broader turnaround effort during those years.

In contrast, Nvidia’s Data Center revenue increased 41% in 2023, 217% in 2024, and 142% in 2025 as hyperscalers scooped up GPUs for AI processes.

But as companies lean further into running smaller AI models and agentic AI — semi- and fully autonomous bots that can perform tasks on your behalf — CPUs are getting more love.

“As customers are evolving and shifting more towards inference, they’re using smaller language models, they’re using more domain-specific models, and a lot of that runs more efficiently on CPUs,” explained Dan McNamara, senior vice president and GM of compute and enterprise AI at AMD.

Then there are AI agents. As the digital helpers pull up websites and navigate files, they’re driving increased CPU usage.

“Imagine I’ve got an agent who’s making me travel arrangements or something,” said Bernstein analyst Stacy Rasgon.

“So I tell the model I’m flying to San Francisco … on this date … that’s all running through the model. But now the agent’s got to go out and book it. So that’s not running on GPUs. That’s running on American Airlines servers on CPUs,” he said.

CPUs also play an integral part when it comes to mining data, personalization, and the analysis that provides context to a GPU and ultimately an AI model.

“All of that stuff has to be farmed from all the different databases. And often you may give a query that is maybe a few hundred words, but there’s upwards of literally thousands of words that actually make that query, that chat, that engagement, that agent intelligent. All that data management and wrangling is happening across [CPUs], across fleets,” Buck added.

According to BofA Global Research analyst Vivek Arya, chipmakers should see a boost to their revenue thanks to the increased use of CPUs in the AI ecosystem, with the total addressable market for CPUs climbing from $27 billion in 2025 to as much as $60 billion by the end of 2030.

AI servers, he wrote in a note to investors, will account for about 70% of that amount, while non-AI servers will make up 19%.

That doesn’t mean the age of the GPU is coming to an end, though. On the contrary, as GPU usage increases, CPU demand will increase alongside it.

“It’s not a zero-sum game. CPUs are growing, but GPUs are not slowing down, because there’s more and more workloads,” McNamara said.

Sign up for Yahoo Finance’s Week in Tech newsletter.

Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.
