Introduction

Over more than three decades, Arm evolved into a leading CPU architecture supplier for smartphones and embedded devices, licensing CPU architectures and core designs to customers such as Apple (AAPL), Qualcomm (QCOM), Nvidia (NVDA), Samsung, and MediaTek rather than manufacturing chips itself. In 2025, that business model began changing with the introduction of Arm’s AGI CPU program, under which the company started offering direct CPU products targeting hyperscale AI infrastructure, manufactured by undisclosed foundry partners. Arm nevertheless remains fabless.

Arm has long been associated with smartphones and low-power processors, but the company’s role in artificial intelligence is rapidly expanding far beyond mobile devices. While investors continue to focus primarily on GPUs and AI accelerators, Arm is increasingly positioning itself as the CPU architecture layer underneath the entire AI ecosystem, spanning hyperscale cloud infrastructure, edge AI devices, PCs, and physical AI systems.

In the past, I’ve written several articles centered on Arm and the broader artificial intelligence infrastructure ecosystem, including Nvidia, Broadcom (AVGO), Qualcomm, and Advanced Micro Devices (AMD). In a July 2024 Seeking Alpha article entitled “Arm Holdings: A Complementary Investment To Nvidia,” I discussed how Arm’s Neoverse platform was steadily increasing its presence in AI data centers through relationships with hyperscalers such as Amazon AWS, Google Cloud, Microsoft Azure, and Nvidia Grace-based systems.

Since its IPO, Arm has increasingly positioned itself not simply as an architecture supplier for smartphones and embedded devices, but as a central CPU platform for AI infrastructure.

According to CEO Rene Haas during the company’s May 6, 2026 earnings call:

“As AI is moving from human-based queries to continuous agent-driven workloads, this shift is expanding the role of the CPU. These Agentic workloads require CPUs to coordinate tasks, move data, manage memory, enforce security and orchestrate workloads around accelerators.”

Importantly, Nvidia remains the dominant GPU supplier, yet even Nvidia is increasingly relying on Arm-based CPUs underneath its AI systems.

Nvidia’s Vera CPU is based on Arm v9.2 architecture and is designed to work alongside Rubin GPUs in next-generation AI infrastructure systems. Although Nvidia designs the CPU internally, it relies on Arm’s instruction-set architecture and is tightly integrated with Rubin GPUs for orchestration, memory management, networking, scheduling, and accelerator coordination.

Arm’s AI Total Addressable Market Forecast FY2026–FY2031

According to Chart 1, Arm estimates that its total AI-related addressable market will expand from approximately $535 billion in FY2026 to more than $1.5 trillion by FY2031. The largest expansion occurs within Cloud AI, where the XPU opportunity alone is projected to exceed $1 trillion by FY2031.
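As a quick sanity check on that forecast, growth from roughly $535 billion in FY2026 to $1.5 trillion in FY2031 implies a compound annual growth rate of about 23%. A minimal arithmetic sketch (the dollar figures are Arm’s own projections from Chart 1, not independent estimates):

```python
# CAGR implied by Arm's AI TAM forecast in Chart 1 (FY2026 -> FY2031).
# Both endpoints are the company's projections, reproduced here for arithmetic only.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

tam_fy2026 = 535.0    # $ billions, per Chart 1
tam_fy2031 = 1500.0   # $ billions, per Chart 1 ("more than $1.5 trillion")

print(f"Implied 5-year CAGR: {cagr(tam_fy2026, tam_fy2031, 5):.0%}")
```

Since the FY2031 figure is described as “more than $1.5 trillion,” the roughly 23% annual growth rate should be read as a floor on what the forecast implies.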

The chart also demonstrates that Arm’s opportunity extends across three major AI categories: Cloud AI, Edge AI, and Physical AI. While Edge AI remains an important revenue contributor, the long-term expansion of Cloud AI infrastructure is becoming the dominant growth driver for the company.

The most important aspect of the chart is the projected expansion of data-center CPU opportunity from approximately $50 billion to more than $100 billion by FY2031. This supports Arm’s argument that CPUs remain essential in AI systems despite the rapid growth of accelerators and GPUs.

Chart 1: Arm AI TAM Forecast FY2026–FY2031

Server CPU TAM and Market Share Forecast

According to Table 1, the AI data-center buildout is not eliminating CPUs but increasing their strategic importance. The total server CPU market is projected to expand from $27.7 billion in 2025 to $56.2 billion in 2028, while Arm’s server CPU market share rises from 13.4% to 23.1%.

The gains come primarily at the expense of Intel (INTC), whose market share declines from 52.0% to 43.9% over the same period. AMD’s share remains comparatively stable, while Arm gains share through the continued adoption of custom Arm-based CPUs by hyperscalers and accelerator vendors.
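Translating those share percentages into dollars makes the shift concrete. Applying Table 1’s share figures to its TAM figures, Arm’s implied server CPU opportunity grows from roughly $3.7 billion in 2025 to roughly $13 billion in 2028, an implied three-year growth rate above 50%. A short sketch of that arithmetic (all inputs are the article’s forecast figures, not independent data):

```python
# Back-of-the-envelope dollars implied by Table 1's TAM and share forecasts.
# Inputs are the article's projections; this only multiplies them out.

def implied_revenue(tam_billion: float, share: float) -> float:
    """Dollar opportunity implied by a TAM and a market-share fraction."""
    return tam_billion * share

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

arm_2025 = implied_revenue(27.7, 0.134)   # ~ $3.7B
arm_2028 = implied_revenue(56.2, 0.231)   # ~ $13.0B

print(f"Arm server CPU opportunity: ${arm_2025:.1f}B -> ${arm_2028:.1f}B")
print(f"Implied 3-year CAGR: {cagr(arm_2025, arm_2028, 3):.0%}")
```

Note that because the overall TAM nearly doubles, even Intel’s declining share (52.0% to 43.9%) still corresponds to growing absolute dollars under these forecasts; the share loss is relative, not a revenue decline.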

The shift reflects growing deployments of AWS Graviton, Google Axion, Microsoft Cobalt, Nvidia Grace and Vera, TPU host CPUs, and Trainium infrastructure. Increasingly, AI infrastructure is being built around heterogeneous systems where CPUs and accelerators work together rather than independently.

Why AI Increases CPU Demand Rather Than Replacing CPUs

One of the most important themes emerging from Arm’s recent earnings call is that AI agents increase CPU demand rather than replace CPUs.

Historically, much of the market narrative implied that GPUs would dominate AI infrastructure while CPUs became secondary components. Arm is now arguing the opposite. As AI systems become increasingly agentic and autonomous, orchestration complexity rises dramatically. CPUs are responsible for task scheduling, memory management, networking, security, data movement, and coordination among accelerators.

During the conference call, CEO Haas repeatedly emphasized that in the data center, CPU core counts could ultimately become more important than chip counts. Arm’s AGI CPUs feature 136 cores, while Nvidia’s Vera CPU contains 88 cores. Haas indicated that future generations could double or quadruple those counts over time.

This thesis also aligns with broader hyperscaler trends. Google’s newest TPU systems now integrate Arm-based Axion CPUs. Nvidia’s Vera platform combines Arm CPUs with Rubin accelerators. AWS continues expanding Graviton alongside Trainium and Inferentia deployments. Microsoft is aggressively scaling Arm-based Cobalt systems throughout Azure.

The Shift Toward Heterogeneous AI Infrastructure

Arm’s growing importance reflects the broader transition toward heterogeneous computing infrastructure.

In earlier generations of AI systems, accelerators operated primarily alongside traditional x86 CPUs. Today, the architecture is evolving toward tightly integrated CPU-accelerator systems optimized around efficiency, memory bandwidth, networking, and orchestration.

This trend benefits Arm because its instruction set is optimized for high core-count efficiency and lower power consumption. According to management commentary, Google’s new TPU systems using Arm Axion CPUs improved overall platform performance by approximately 80% while reducing power consumption by roughly 50% compared with previous x86-based designs.

Management disclosed during the FY2026 Q4 earnings call that customer demand for AGI CPUs exceeded $2 billion across FY2027–FY2028 shortly after launch, although the company maintained a more conservative near-term revenue outlook of approximately $1 billion because of supply constraints involving wafers, packaging, memory, and testing capacity.

The market reacted negatively to the more conservative near-term revenue outlook, sending Arm shares down roughly 10% following the earnings call.

PC CPU Market Share and Revenue Share Forecast

According to Table 2, Arm is also steadily gaining share in AI PCs and edge devices. Arm’s PC CPU unit share is projected to rise from 13.6% in 2025 to 18.6% in 2028, while revenue share increases from 11.7% to 15.5%.

Intel’s dominance continues to erode in both units and revenue, while AMD gains more gradually. The market transition reflects increasing adoption of Arm-based AI PCs built around Qualcomm Snapdragon X processors, Apple’s M-series processors, and Microsoft’s Copilot+ ecosystem.

The expansion of local inference workloads further benefits Arm because on-device AI processing emphasizes power efficiency and integrated neural processing rather than maximum raw compute alone.

Arm’s Position In Edge AI And AI PCs

Edge AI remains an important long-term opportunity for Arm even as Cloud AI becomes the dominant revenue driver.

Arm architecture already powers the overwhelming majority of smartphones globally, while Apple (AAPL), Samsung, Qualcomm, and MediaTek continue building increasingly advanced AI functionality around Arm-based designs.

In a November 16, 2025 Substack article entitled “Arm at the Edge – Apple’s AI Paradox,” I discussed how Apple’s relatively conservative AI implementation demonstrated the difference between architecture and implementation. While Apple’s processors rely on Arm architecture, competing Android platforms increasingly emphasize larger neural processing blocks and local generative AI workloads.

Importantly, Arm benefits regardless of which OEM ultimately delivers superior AI functionality because both high-volume and high-performance devices continue paying royalties back to Arm’s architecture ecosystem.

Physical AI Expands Arm’s Long-Term Opportunity

One of the least appreciated components of Arm’s opportunity may be Physical AI.

According to the company’s projections, Physical AI TAM could expand from approximately $25 billion in FY2026 to roughly $50 billion by FY2031. This category includes robotics, autonomous systems, industrial automation, automotive AI, sensors, drones, and humanoid robotics.

Unlike cloud AI systems, physical AI devices require extremely efficient local processing, low latency, low power consumption, and scalable edge inference. These are areas where Arm architecture has historically maintained strong advantages.

As AI increasingly migrates from cloud-only deployments toward distributed intelligent systems, Arm’s architectural footprint across embedded devices may become increasingly valuable.

Investor Takeaway

Arm’s investment thesis has evolved significantly beyond smartphones and low-power mobile processors.

The company is increasingly becoming the CPU architecture layer underneath hyperscale AI infrastructure itself. AI agents, heterogeneous systems, and accelerator orchestration are increasing CPU importance rather than reducing it. This trend directly benefits Arm through deployments across AWS Graviton, Google Axion, Nvidia Grace and Vera, Microsoft Cobalt, TPU host CPUs, and AI edge devices.

Chart 1 above demonstrates that Arm’s addressable AI opportunity could expand from roughly $535 billion to more than $1.5 trillion by FY2031. Meanwhile, Table 1 above shows Arm steadily gaining server CPU share as hyperscalers continue migrating away from traditional x86 infrastructure toward custom Arm-based designs.

The market continues focusing overwhelmingly on GPUs and accelerators. However, CPUs remain essential components of AI infrastructure, particularly as AI systems become increasingly agentic and orchestration-intensive. Arm’s architecture is positioning itself directly in the middle of that transition.

While valuation remains elevated and royalty monetization still lags licensing momentum, Arm increasingly appears less like a smartphone IP company and more like a foundational AI infrastructure platform.