
Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas – Sept. 23, 2025. Photographer: Kyle Grillot/Bloomberg
A company with $20 billion in annual revenue can’t credibly commit $1.4 trillion to capital infrastructure. The math is simple, and by the end of 2026 market pressure is likely to push OpenAI toward scaling back its record-setting AI data center buildout.
OpenAI CEO Sam Altman has said the company is “looking at commitments of about $1.4 trillion over the next 8 years,” anchored by flagship projects like the $500 billion Stargate network of AI data centers. The scale is staggering and unprecedented. The company is effectively running a race between adoption and obsolescence, betting it can generate returns before the technology depreciates and competitors catch up.
A market correction is approaching. Here’s why OpenAI’s infrastructure ambitions will slow down in 2026, and what it means for the broader AI boom.
The Hype: Bets Too Big To Succeed
Nvidia has announced it will invest up to $100 billion in OpenAI and supply advanced processors for the planned facilities. Nvidia CEO Jensen Huang called this “the largest computing project in history,” while Altman told CNBC that OpenAI “did not quite set our sights big enough given the market demand.” These staggering commitments are being made at a pace tech history has rarely, if ever, seen.
The confidence is striking given OpenAI’s fundamentals. The company is exiting 2025 with roughly $20 billion in annual revenue, a remarkable figure for a startup but nowhere near the scale needed to justify $1.4 trillion in infrastructure commitments. We’ve seen this movie before in telecom overbuild cycles; the math doesn’t work.
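To make that mismatch concrete, here is a minimal back-of-envelope sketch in Python. The $20 billion revenue figure and the $1.4 trillion, eight-year commitment come from the numbers above; the 20% capex-share assumption is purely illustrative, not anything OpenAI has disclosed.

```python
# Back-of-envelope comparison of OpenAI's revenue and its commitments.
# Revenue and commitment figures are taken from the article; the capex
# share is a hypothetical illustration.

annual_revenue = 20e9          # ~$20B in annual revenue exiting 2025
total_commitments = 1.4e12     # ~$1.4T in commitments
commitment_years = 8           # spread "over the next 8 years"

annual_commitment = total_commitments / commitment_years
revenue_multiple = annual_commitment / annual_revenue

# Hypothetical: suppose 20% of revenue could be set aside for infrastructure.
capex_share = 0.20
revenue_needed = annual_commitment / capex_share

print(f"Average yearly commitment: ${annual_commitment / 1e9:.0f}B")
print(f"That is {revenue_multiple:.1f}x current annual revenue")
print(f"Revenue needed at a {capex_share:.0%} capex share: ${revenue_needed / 1e9:.0f}B")
```

Under those assumptions, the average yearly bill alone is close to nine times OpenAI’s current revenue, and the revenue required to cover it comfortably would be more than forty times larger than today’s.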
The Financing Stack: How OpenAI Is Funding the Boom
How can a startup like OpenAI marshal such resources? The answer is a web of interconnected deals spreading the cost and risk across the industry. OpenAI is effectively leveraging other companies’ balance sheets to finance its ambitions.
Consider the players: SoftBank and Oracle have joined OpenAI in the Stargate venture, planning to build new data centers together. Cloud providers across the hyperscale market are investing heavily to host OpenAI’s models, betting that future profits will justify today’s outlays.
Specialized neo-cloud firms have also entered the fray. CoreWeave, a cloud startup backed by Nvidia, inked a multi-year deal to provide OpenAI compute power while raising $2.6 billion in the private debt market. The firm is using those loans to buy the expensive Nvidia chips OpenAI needs, with the hardware itself serving as collateral. Crusoe Energy is similarly raising capital to construct massive server farms for OpenAI’s workloads; in Texas, Crusoe helped build OpenAI’s first gigawatt data center.
Meanwhile, Nvidia isn’t just selling chips: it has taken equity stakes in CoreWeave and Crusoe and even agreed to buy excess capacity if CoreWeave cannot rent it to other clients. The arrangement channels capital to the startups, which in turn spend it on Nvidia’s GPUs. The startups are also raising billions through off-balance-sheet special purpose vehicles (SPVs) to finance the purchases, creating a self-funded revenue cycle that blurs how much demand is genuinely organic versus financially engineered. The structure is complex and opaque.
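To illustrate the circularity, here is a toy sketch that traces money around the loop. The $2.6 billion debt raise comes from the paragraph above; every other amount is a hypothetical placeholder, not an actual deal term.

```python
# Toy ledger tracing the circular flow described above. All figures except
# CoreWeave's $2.6B debt raise are hypothetical placeholders.
from collections import defaultdict

ledger = defaultdict(float)

def transfer(payer, payee, amount_b, note):
    """Record a payment (in $B) between two parties."""
    ledger[payer] -= amount_b
    ledger[payee] += amount_b
    print(f"{payer:>9} -> {payee:<9} ${amount_b:>5.1f}B  ({note})")

transfer("Nvidia", "OpenAI", 10.0, "hypothetical investment tranche")
transfer("OpenAI", "CoreWeave", 8.0, "hypothetical payment for rented compute")
transfer("Lenders", "CoreWeave", 2.6, "private-debt raise, GPUs as collateral")
transfer("CoreWeave", "Nvidia", 9.0, "hypothetical GPU purchases")

print("\nNet positions ($B):")
for party, balance in sorted(ledger.items()):
    print(f"  {party:<9} {balance:+.1f}")
```

In this toy version, the chipmaker books GPU revenue largely funded by its own investment and by debt secured against the chips themselves, which is exactly why organic demand is hard to read from the headline numbers.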
The common thread is that several industry players are tying their fortunes to OpenAI’s success. This spreads out the upfront cost, but it also concentrates risk: if OpenAI stumbles, the domino effect could hit cloud upstarts, chipmakers, real estate trusts and investors. OpenAI’s capital web now operates as a shared-risk financing ecosystem anchored to a single assumption: that demand will rise fast enough to fill all this new capacity.
The Demand Problem: Will Revenue Arrive in Time?
All these investments assume demand for AI services will materialize fast enough to justify the build-out. Much of today’s AI usage remains in experimental or free phases, still struggling to translate into repeatable enterprise revenue. Converting millions of free users into paying customers remains one of the industry’s central challenges.
Enterprises, the most lucrative customers, are cautiously piloting AI rather than buying at scale, worried about accuracy and security. They won’t ramp up spending until they see proven ROI.
One strong source of AI spending is the government sector. Defense and intelligence agencies are pouring money into AI capabilities, creating a fast-growing market for firms like Palantir, whose $10 billion, 10-year Army contract is an example of durable AI demand.
Meanwhile, the technology moves faster than concrete and steel. AI hardware has a brutal shelf life: top-end GPUs become obsolete within 4-6 years, eclipsed by newer chips. An expensive AI cluster deployed today might be near-worthless by 2030 if refresh cycles accelerate as expected, meaning that by the time demand truly booms, today’s infrastructure could already require an upgrade.
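A back-of-envelope sketch shows how quickly that clock runs, assuming simple straight-line depreciation. The 4-6 year useful life comes from the paragraph above; the $10 billion cluster cost and 2026 deployment year are illustrative assumptions.

```python
# Straight-line depreciation of a hypothetical GPU cluster deployed in 2026.
# Useful-life range (4-6 years) comes from the article; the cost is illustrative.

cluster_cost = 10e9   # hypothetical upfront cost of one large cluster
deploy_year = 2026
check_year = 2030

for useful_life in (4, 5, 6):
    annual_writeoff = cluster_cost / useful_life
    book_value = max(0.0, cluster_cost - annual_writeoff * (check_year - deploy_year))
    print(f"{useful_life}-year life: book value in {check_year} ≈ "
          f"${book_value / 1e9:.1f}B ({book_value / cluster_cost:.0%} of cost)")
```

Even on the generous end of that range, most of the value is written off before the end of the decade, which is why the revenue has to show up quickly.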
OpenAI is betting it can achieve exceptional growth in AI usage to outrun the depreciation clock. The company has a short window to generate returns before the next spending cycle comes due. If the current boom overbuilds capacity, we could see a glut of underused servers and a collapse in cloud pricing, an echo of the early-2000s telecom crash, when overcapacity drove many ambitious firms to ruin.
The Competition Problem: OpenAI Is Losing Ground
Amid its breakneck expansion, OpenAI may be overextended, juggling so many projects that it risks losing ground in its core business. In addition to its massive infrastructure plans, the company has ventured into hardware devices (in partnership with famed designer Jony Ive) and hinted at advertising-based business models. It is stretching well beyond its original focus on advancing AI models.
Competition is intensifying, and OpenAI’s flagship product, ChatGPT, is starting to fall behind rival models. As Google gained ground, OpenAI’s leadership hit the panic button. According to the Wall Street Journal, Altman declared a “code red” emergency in a message to employees, urging them to pause peripheral projects and redirect all efforts toward improving the core chatbot and its underlying models.
Right now, OpenAI risks becoming a victim of its own hype. It has set very high expectations that it may not be able to meet in the timeframe investors require.
Market Patience Wears Thin
OpenAI’s fate hinges on the patience of its backers. So far, investors have been extraordinarily generous, funding losses and infrastructure bets in hopes of future dominance. But patience has limits. The company remains unprofitable, and a recent HSBC analysis indicates it may not turn a profit until well into the next decade.
If growth or technological edge falters, the funding environment will turn hostile. Markets separate winners from losers swiftly after a hype cycle crests.
In 2026, AI data center plans are likely to be scaled back or delayed. The market will demand results: sustainable revenue, technological leadership and a path to profitability. The true winners will be those who capture profitable demand before capital tightens and the bubble deflates.