Bosses throughout the world love the idea of using AI to replace employees. They can talk all they want about how much more efficient everyone will be with AI, but the truth is if they can fire staffers, their bottom line looks better, their stock price goes up, and the CEO makes a ton more money.
It’s a win-win if your title starts with a C or you’re a stockholder.
Companies deny they’re doing this, of course. Take Microsoft, for example. CEO Satya Nadella claims AI tools like GitHub Copilot now write up to 30 percent of Microsoft’s software code. Simultaneously, Microsoft has laid off over 15,000 people, nearly 7 percent of its workforce. Coincidence? I think not.
The money Microsoft is saving on all those ex-staffers helps pay for the $75 to $80 billion it’s spending on AI capital expenditure this year.
There’s only one little problem with all this. It presumes that 1) AI can actually get the work done, and 2) AI will stay cheaper than the people it replaces.
As for the first: of course AI can replace some workers. Call center help staff? Maybe. Even that may not save as much money as company executives think. We’ve been shipping call center jobs offshore for decades; this is just another step in a long-established pattern. There’s nothing new here.
The real savings, though, come from getting rid of developers, engineers, designers – you know, people like you and me. But once you’ve separated the wheat from the chaff of AI hype-spam, the evidence that AI can really deliver that value becomes much less clear.
I find it telling that, according to the 2025 Stack Overflow Developer Survey, 84 percent of programmers now use or plan to use AI tools in their workflows, but 46 percent of AI-using developers don’t trust their results. And, even more interestingly, as AI developer tools have been “improving,” programmers have come to trust them less than ever.
Why? Because instead of writing code, they’re spending – wasting? – a ton of time fixing AI coding blunders. This is not a productive use of mid-level, never mind senior, programmers.
Take AI’s latest and greatest release: GPT-5. OpenAI’s CEO Sam Altman calls GPT-5 “the best model in the world.” Funny. GPT-5 will confidently tell you that Willian H. Brusen is a former US president. For those of you not from the States: there’s no such person, never mind a former president. Serious GPT users were so thrilled with GPT-5 that they demanded OpenAI bring back the older, but more reliable, GPT-4o model – and got it.
The verdict of some users is in. They hate GPT-5.
Let’s make this even scarier. Suppose the current state of AI is as good as it gets. That’s not just me being cynical. Recent studies from both Apple and Arizona State University indicate that our current ways of improving LLMs have gone about as far as they can go. Sure, you can increase the tokens and throw more GPUs at training, but to quote the Arizona State paper, “Our results reveal that CoT [Chain of Thought] reasoning is a brittle mirage that vanishes when it is pushed beyond training distributions.”
Still, AI is good enough to economically replace knowledge workers, isn’t it? Isn’t it? Nope.
First, despite all the AI leaders’ marketing-slop, as The Economist recently pointed out, only 10 percent of firms are using AI in a meaningful way. AI, in short, is not as big a deal as the insane stock market would have you believe.
Moreover, and this is the real killer, customers are not paying anything like AI’s real cost. Today, every AI company is selling you AI at loss-leader prices. As writer Ewa Szyszka observed on the Kilo Code blog, people had been assuming that since “the raw inference costs were coming down fast, the applications’ inference costs would come down fast as well but this assumption was wrong.”
What that means is that today’s more advanced “models can require over 100x compute for challenging queries compared to traditional single-pass inference.” Compute isn’t cheap. So AI-enabled code editors, such as Cursor and Claude Code, are replacing their introductory $20 a month plans with $200 a month plans. All this as vibe coding’s reputation continues to circle the drain.
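To see why that 100x multiplier matters for plan prices, here’s a back-of-envelope sketch in Python. Every per-query and usage number below is an assumption I’ve picked for illustration; only the 100x figure comes from the quote above.

```python
# Illustrative sketch: how a 100x compute multiplier for multi-step
# "reasoning" queries turns a covered $20/month seat into an uncovered one.
# All numbers except the 100x multiplier are assumptions, not vendor data.

RAW_COST_PER_QUERY = 0.002      # assumed single-pass inference cost, USD
REASONING_MULTIPLIER = 100      # "over 100x compute for challenging queries"
QUERIES_PER_DEV_PER_DAY = 50    # assumed usage
WORKDAYS_PER_MONTH = 21

# Monthly provider cost for one developer, single-pass vs reasoning mode
single_pass = RAW_COST_PER_QUERY * QUERIES_PER_DEV_PER_DAY * WORKDAYS_PER_MONTH
reasoning = single_pass * REASONING_MULTIPLIER

print(f"Single-pass inference: ${single_pass:.2f} per dev per month")
print(f"Reasoning-mode inference: ${reasoning:.2f} per dev per month")
```

With these made-up but plausible inputs, a $20 plan comfortably covers single-pass usage and doesn’t come close to covering reasoning-mode usage – which is exactly the gap a jump from $20 to $200 plans would paper over.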
Oh, and those plans with the tempting low prices? They usually come with token limits that make them far less powerful than the higher tiers.
Of course, Altman can predict that “The cost to use a given level of AI falls about 10x every 12 months,” but I believe that just as much as I’d believe an Old West snake oil huckster guaranteeing me his miracle elixir would grow my hair back.
The AI developer analysis company DX has pointed out that “The real cost of implementing AI tools across engineering organizations often runs double or triple the initial estimates, and sometimes more.”
As Laura Tacho, CTO of DX, puts it: “We were just having a conversation about how many tools each of us personally are using on a daily basis, those are all like 20 euros a month or 20 bucks a month. When you scale that across an organization, this is not cheap. It’s not cheap at all.” For example, Justin Reock, DX’s deputy CTO, said: “A single engineer might use GitHub Copilot for code completion, ChatGPT for brainstorming, and Claude for documentation, resulting in overlapping costs without centralized visibility.” It all adds up.
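The arithmetic behind “it all adds up” is worth spelling out. This Python sketch uses the roughly-$20-per-seat figure from the quote above; the three tool categories and the headcount are assumptions for illustration, not anyone’s actual price list.

```python
# Illustrative sketch: overlapping per-seat AI tool costs across an
# engineering org. Prices and headcount are assumed example values.

monthly_seat_cost = {              # USD per engineer per month (assumed)
    "code completion": 20,
    "chat/brainstorming": 20,
    "documentation assistant": 20,
}

engineers = 500                    # assumed headcount

# One engineer's monthly stack, then the org's annual bill
per_engineer = sum(monthly_seat_cost.values())
annual_org_cost = per_engineer * engineers * 12

print(f"Per engineer: ${per_engineer}/month")
print(f"Organization: ${annual_org_cost:,}/year")
```

Three overlapping $20 tools across 500 engineers comes to $360,000 a year in subscriptions alone – before DX’s “double or triple the initial estimates” implementation costs even enter the picture.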
So, what happens on the day that OpenAI, with a burn rate of $8 billion a year, and Anthropic, with a $3 billion burn rate, must make an honest-to-goodness profit? Good question. Smarter financial people than me, which wouldn’t take much, call OpenAI’s path to profitability “an open question.” Anthropic faces the same doubts.
Other companies, like Microsoft and Google, are sneaking AI charges into your existing software-as-a-service (SaaS) bills. As SaaS management company Zylo pointed out, “AI tools embedded in existing SaaS platforms can be deceptively expensive. For example, Microsoft Copilot now adds up to $30 per user per month to Microsoft 365 subscriptions, while Google has increased Workspace prices but bundled … That makes it difficult to compare options side by side — and even harder to calculate the total cost of ownership.”
My best guesstimate is that, at a minimum, by 2026 you can expect to pay ten to fifteen times more than you do today for the same real AI work and the same results. Suddenly, switching over to AI doesn’t look like much of a bargain, does it? ®