The March 2026 Model Refactoring Update quietly removed Deep Research and Local Data Control from ChatGPT, triggering an 8.4% drop in daily active users and a wave of developer defection that signals something more structural than a product stumble.
There’s a particular kind of frustration that comes not from being ignored, but from watching something you championed get systematically hollowed out. That’s what a growing slice of ChatGPT’s most committed users felt when OpenAI’s Model Refactoring Update landed on March 31, 2026, deprecating the capable GPT-5 Omni in favor of GPT-6 Lite, a model built around API latency and cost efficiency rather than the reasoning depth that made GPT-5 Omni genuinely useful for complex work.
The features axed in the transition weren’t minor. Deep Research and Local Data Control had been positioned by OpenAI itself as flagship capabilities through 2024 and 2025, the kind of tools that converted skeptics into advocates. Removing them without a like-for-like replacement isn’t a technical pivot; it’s a values statement about who OpenAI is actually building for.
CEO Sam Altman addressed the fallout in a leaked town hall memo dated April 2, arguing that the marginal reasoning gains served only the top one percent of users and couldn’t justify the infrastructure overhead. It’s a defensible cost-accounting position. It’s also a remarkable thing to say out loud about the people who built your brand through word-of-mouth, tutorial videos, and enterprise evangelism. Chief Product Officer Kevin Weil has framed the shift around global accessibility, which sounds principled until you notice that advanced capabilities haven’t disappeared so much as migrated behind a $2,500-per-month enterprise subscription wall.
Appfigures and Sensor Tower data show ChatGPT’s mobile daily active users fell 8.4% in the first two weeks of April alone. Brandwatch sentiment analysis found “censored,” “dumber,” and “ruined” spiking as top association keywords, the kind of language that doesn’t come from casual disappointment but from users who felt a specific loss of capability. Meanwhile, GitHub recorded a 40% jump in forks of open-source frameworks like llama.cpp and vLLM as developers moved to self-host older models rather than accept the downgrade. When your power users start building around you instead of with you, the retention problem is already downstream of the real one.
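The self-hosting path those fork numbers point to is lower-friction than it might sound: both llama.cpp’s llama-server and vLLM expose OpenAI-compatible HTTP endpoints, so migrating an existing client is often little more than swapping the base URL. A minimal sketch of the pattern, using only the standard library (the local port and model name here are placeholders, not real deployments):

```python
# Sketch: build an OpenAI-style chat completion request aimed at a
# self-hosted server (llama.cpp's llama-server or vLLM) instead of a
# hosted API. Server address and model name are hypothetical.
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct a POST to the OpenAI-compatible /v1/chat/completions route."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# The only thing that changes when moving off a hosted API is base_url:
req = build_chat_request("http://localhost:8080", "local-gguf-model", "Hello")
```

Sending `req` with `urllib.request.urlopen` against a running local server would return the familiar chat-completion JSON, which is precisely why the switching cost for developers is so low once the weights are in hand.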
What’s happening here is the acceleration of a fracture that’s been forming for a while. There are now two meaningfully different product categories competing under the AI assistant label: Utility AI, optimized for business workflows, API throughput, and enterprise compliance, and Assistant AI, which is what most people thought they were getting when they signed up for ChatGPT. OpenAI appears to have decided, explicitly or by consequence, that these two categories can no longer live comfortably in the same product tier.
The timing is also worth noting. Anthropic has been the quiet beneficiary of every wave of ChatGPT dissatisfaction since 2023, and the positioning of Claude Opus as the professional alternative is now more credible than it’s ever been. OpenAI built the market for serious AI tool adoption. Whether it retains the users who took that market seriously is a different question entirely.
The broader implication for the industry isn’t about OpenAI specifically. It’s about what happens when consumer AI products reach sufficient scale that the growth math starts favoring advertising revenue and plugin partnerships over depth of capability. That’s a familiar story from social media, search, and productivity software. The lesson each time was that the users who built the platform’s credibility are usually the last ones the business model considers when optimizing for margin. Watch whether OpenAI reverses course on any of the deprecated features before mid-year, and watch whether the developer migration to open-source self-hosting gains enough momentum to become a genuine distribution alternative. Both will be more telling than any press release.