
OpenAI was exclusive to Microsoft Azure for six years. It took less than 24 hours to get on AWS.
On April 27, OpenAI and Microsoft formally ended the exclusive partnership that had defined enterprise AI for the previous six years. OpenAI was now legally free to run its products on any cloud platform.
The next day, on April 28, OpenAI was already live on Amazon Web Services.
Three new products went into limited preview simultaneously: GPT-5.5 and GPT-5.4 on Amazon Bedrock; Codex on Amazon Bedrock, with full integration into the Codex CLI, the desktop app, and the Visual Studio Code extension; and a new offering called Bedrock Managed Agents, powered by OpenAI. AWS made the announcement alongside its biggest annual customer event, with Matt Garman, the CEO of AWS, sharing the stage with OpenAI leadership.
The speed is what tells the story. The legal contract changed on a Monday. The product was live by Tuesday. That kind of timeline does not happen unless both companies had been preparing the integration for months, and the legal change was the only thing holding it up.
This is not a routine partnership announcement. It is the operational consequence of the AI industry quietly becoming multi-cloud, and it tells you something specific about how IPO timing pressure is now driving the strategic decisions of every major lab. The exclusive era of AI is over. The phase that replaces it is already structurally different in ways most coverage has not registered.
What AWS Customers Now Get
The three products launched together change how enterprise AI gets deployed. Each one removes a friction point that has constrained how AWS customers could use OpenAI before today.
OpenAI’s frontier models, including GPT-5.5 and GPT-5.4, are now accessible through the same Bedrock APIs that millions of organizations already use to access models from Anthropic, Meta, Mistral, Cohere, and Amazon itself. Customers do not need to configure new infrastructure or learn a different security model. OpenAI usage inherits AWS’s existing IAM access management, PrivateLink connectivity, encryption, CloudTrail logging, and compliance frameworks. Most importantly for the enterprise buyer, OpenAI usage applies directly toward existing AWS cloud commitments. A company with a $50 million AWS commit can now spend that commit on OpenAI inference rather than negotiating a separate OpenAI contract.
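In practice, that means calling an OpenAI model looks like calling any other Bedrock model. The sketch below uses the standard boto3 Bedrock Runtime Converse API with ordinary AWS credentials; the model identifier shown is an assumption for illustration, since the announcement did not spell out the exact catalog IDs.

```python
# Minimal sketch: invoking an OpenAI model through Amazon Bedrock with
# ordinary AWS credentials and IAM permissions. The model ID below is a
# placeholder assumption; check the Bedrock model catalog for the real one.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="openai.gpt-5.5",  # hypothetical identifier for illustration
    messages=[
        {"role": "user", "content": [{"text": "Summarize our Q3 incident reports."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Because the request is authorized through the caller’s existing IAM role, the CloudTrail logging, PrivateLink routing, and compliance controls described above apply without any OpenAI-specific setup.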
Codex deserves its own analysis. OpenAI’s coding agent now reaches over four million weekly users. Bringing it to Bedrock means AWS customers can run Codex using their AWS credentials, process inference through Bedrock, and apply Codex usage to their AWS commit. Anthropic’s Claude Code has been the dominant agentic coding tool inside AWS-heavy enterprises until now. With Codex available on Bedrock alongside Claude, the coding agent fight just moved into the same buying window for the same customer.
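The launch materials describe the CLI, desktop, and VS Code integrations but not the wire protocol underneath them. As a rough, non-authoritative sketch of what "run inference through Bedrock with AWS credentials" can look like for tooling built on the OpenAI SDK, the snippet below assumes Bedrock's OpenAI-compatible Chat Completions endpoint; the base URL, the environment variable, and the model identifier are all assumptions for illustration, not confirmed details of the Codex integration.

```python
# Hedged sketch: pointing an OpenAI-SDK-based tool at Bedrock's
# OpenAI-compatible endpoint instead of api.openai.com. The base_url,
# credential mechanism, and model name are assumptions for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://bedrock-runtime.us-east-1.amazonaws.com/openai/v1",
    api_key=os.environ["BEDROCK_API_KEY"],  # a Bedrock credential, not an OpenAI key
)

completion = client.chat.completions.create(
    model="openai.gpt-5.5",  # hypothetical Bedrock model ID
    messages=[{"role": "user", "content": "Refactor this function to be async."}],
)

print(completion.choices[0].message.content)
```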
Bedrock Managed Agents is the most interesting product of the three. AWS describes it as combining OpenAI’s frontier models and agent harness with AWS infrastructure to deploy production-ready agents quickly. Each agent has its own identity, logs every action, and runs inside the customer’s environment. This is OpenAI’s agent technology, packaged for the parts of the enterprise where compliance, audit trails, and identity management matter more than speed.
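AWS has not published a full API surface for Bedrock Managed Agents in this preview, so the snippet below is not that product's interface. It is a minimal sketch of the governance pattern the description implies, assuming one hypothetically named IAM role per agent, scoped to a single model, so that every invocation is attributable to that agent's identity in CloudTrail.

```python
# Illustrative sketch only: one IAM role per agent, scoped to a single
# Bedrock model, so every invocation is attributable to that agent's
# identity in CloudTrail. This is not the Bedrock Managed Agents API.
import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "invoice-triage-agent"  # hypothetical agent name

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
        # Placeholder ARN; the real model ARN would come from the Bedrock catalog.
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5",
    }],
}

iam.create_role(RoleName=ROLE_NAME, AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="scoped-model-access",
    PolicyDocument=json.dumps(permissions_policy),
)
```

Every call the agent then makes is logged under its own role, which is the audit property the Managed Agents description emphasizes.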
What ties all three together is the financial mechanism. AWS commits are how Fortune 500 companies actually budget for cloud spending. They negotiate a multi-year commitment with AWS, then allocate that commitment across services. Until April 28, the only way a customer could spend AWS commits on OpenAI was indirectly. Now they spend it directly. That single change is what flips OpenAI from “another vendor” into “another line item in the existing AWS budget” for thousands of enterprise customers.
The Mirror Move At Microsoft
The story that almost no coverage has connected is what Microsoft is doing on the other side of the same week.
Microsoft did not just lose OpenAI exclusivity. Microsoft simultaneously deepened its Anthropic relationship. Microsoft 365 Copilot is increasingly defaulting to Claude alongside OpenAI models. Microsoft signed a separate $9.7 billion compute deal with Anthropic the same week the OpenAI exclusivity ended. Microsoft is also reportedly building a new agent offering powered by Claude.
The symmetry is the point. Both clouds now host both labs. AWS has Anthropic (in which it invested $33 billion last month) and OpenAI. Microsoft has OpenAI (in which it retains a 27% stake) and Anthropic. The era when a customer could be locked into one cloud-AI pairing has ended.
What this means for the structure of the AI market is significant:
- The cloud platforms have stopped acting as exclusive distribution partners for AI labs and are now acting as pure distribution layers.
- The AI labs have stopped competing for cloud exclusivity and are now competing for end-customer mindshare across all clouds.
- The economic value at the cloud layer is shifting from exclusive partnerships to AWS-style “best model for every use case” model marketplaces.
- Customers benefit because they can pick the best model for each workload without changing cloud providers.
That is a fundamentally different competitive dynamic than what existed two weeks ago. The AI labs are no longer auctioning loyalty to cloud platforms. They are competing for shelf space inside multi-vendor model marketplaces, where the cloud platform is the storefront and they are one product among many.
Why The Speed Matters
The single most informative detail of the entire announcement is the 24-hour gap between contract signing and product launch.
A contract change on its own does not produce a production-ready integration the next day. The integration work, the security review, the API mapping, the legal compliance audits, and the documentation all take months. The fact that OpenAI was live on Bedrock by April 28 means the engineering work had been finished weeks or months earlier, sitting behind a contractual gate that was waiting to open.
This tells you something specific about IPO timing pressure. OpenAI’s IPO is reportedly being prepared for a 2026 listing. Anthropic’s IPO is targeted for late 2026 or early 2027. Both labs need to demonstrate enterprise deployment scale before going public. A lab that is exclusive to one cloud has a smaller addressable enterprise market than a lab available across all three. Both labs needed multi-cloud distribution before their IPOs, and both labs and both clouds had been preparing the integrations for the moment the contracts permitted them.
The Microsoft side of the same dynamic is symmetrical. Microsoft would not have invested $9.7 billion in a new Anthropic compute deal without months of preparation. The exclusivity end and the Anthropic deal happening in the same week is not coincidence. They are paired strategic moves that both companies needed in place before either lab went public.
Two consequences flow from this:
- The AI lab IPO window is now structurally clearer. Both labs have multi-cloud distribution. Both have enterprise scale visible in their commit pipelines. The remaining question is valuation, not feasibility.
- The cloud platforms now have to compete on tools rather than exclusivity. Bedrock, Azure AI Foundry, and Google Cloud Vertex AI are all becoming the same kind of product: model-agnostic marketplaces where the cloud’s value is the integration, the security, the billing, and the developer experience, not the exclusive AI lab they host.
What This Tells Investors
Three observations are worth taking from this announcement, especially for anyone trying to read where AI capital is moving over the next twelve months.
The first is that enterprise AI is no longer a model-choice problem. It is a deployment-and-governance problem. The customers AWS is trying to win with OpenAI on Bedrock are not the ones who care about which model is marginally better on benchmarks. They are the ones who care about whether the model can be deployed inside their existing identity, security, and compliance infrastructure. Whichever cloud makes that easier across the most models wins. This is why AWS’s Bedrock is positioned as a model marketplace rather than an OpenAI partnership specifically.
The second is that the AI labs are converging on a similar strategic posture. Anthropic and OpenAI both want to be available everywhere their customers already are. Both want to maximize the addressable market before their IPOs. Both are willing to accept that the cloud platforms get to control the customer relationship in exchange for distribution. That is a different competitive dynamic than the “OpenAI plus Microsoft versus Anthropic plus AWS” framing that dominated coverage in 2024 and 2025.
The third is that the actual strategic moat in AI is becoming the model itself, not the cloud distribution. If both labs are available on both clouds, neither cloud can use AI exclusivity as a differentiator. What matters is which model performs best on each customer’s specific workloads, which has the better cost-per-token economics, and which has the better tooling around it. That is a contest the labs win or lose on their own merits, not based on which cloud they happen to live inside.
The companies positioned to benefit from this dynamic are the ones building tools, agents, and workflows that work across multiple AI models on multiple clouds. The companies most exposed to it are the ones whose entire competitive position depended on exclusive partnerships that no longer exist.
The April 28 announcement was the first visible operational consequence of the Microsoft-OpenAI exclusivity ending. It will not be the last. The next twelve months will see most major AI products available across most major clouds, the cloud platforms positioning their model marketplaces as the differentiated layer, and the AI labs competing for shelf space and developer mindshare across the entire enterprise market. That is a more open competitive landscape than what came before, and it is moving faster than most coverage has registered.
The era of one cloud, one model is over. The phase that replaces it has been visible in legal filings since April 27, in product launches since April 28, and is going to define the competitive structure of enterprise AI for the rest of the decade.