
Apple has been roundly criticized for falling behind in AI, and for its rather disastrous Apple Intelligence launch in which it was forced to admit it had promised things it couldn’t deliver.
The second criticism is valid, the first only partly so. But a new report suggests that Apple’s path to delivering on its AI promises may be a radical one: abandoning work on its own model. I do now think that’s absolutely the right thing to do …
Initially giving Apple a pass
I started out by giving Apple a pass on its AI progress for two reasons. First, the company has very rarely aimed to be first to market, instead aiming to be best. A couple of years ago, I argued that some high-profile disasters with early versions of LLMs demonstrated the wisdom of Apple taking a cautious approach in this case.
Bing told one user they were “wrong, confused, and rude” about what year it was, and demanded an apology. In another chat, it offered a Nazi salute phrase as a suggested response. Kevin Roose of the New York Times published an incredible chat session he had with Bing, in which the chatbot declared that they were in love.
When The Telegraph asked Bing to translate some text, the chatbot demanded to be paid for the work, and supplied a (fictitious) PayPal address for the payment. When computer scientist Marvin von Hagen told Bing that he might have the hacking skills to shut down the chatbot, and asked a follow-up question, Bing said its own survival was more important than that of a human who might threaten it.
I argued that Apple using speech as the primary interface for AI made it particularly important for the company to be careful.
Second, there was Apple’s approach to privacy. LLMs rely on huge amounts of human data for their training, and Apple’s determination to ensure that user data is not used for training makes the task particularly challenging. Contrast that with … certain other companies, and the difference is very stark.
But we cannot wait indefinitely
Two years later, however, with Apple users still waiting, I reached the point of deciding that enough was enough. If the company couldn’t give us a smarter Siri anytime soon, it was time for it to allow its customers to vote with their feet and choose to replace Siri with the chatbot of their choice. I argued this would benefit both Apple and its customers.
First, it would relieve Apple of the time pressure it’s currently under. With users free to choose an alternative for now, the company could take its time with Siri and launch the service it really wants, once it is completely happy with its performance.
Second, Apple could boost its own Siri development efforts by gathering a vast amount of data about the chatbot requests iPhone users make of their third-party services. This would be massively valuable in steering the company’s own decisions.
Of course, Apple should ask users’ permission for this, but I’d certainly be happy to grant it, and I suspect most other iPhone users would too. Helping Apple develop the best possible Siri is in all our interests.
After another six months had gone by with almost nothing in the way of visible progress, I said it was getting harder and harder to believe that Apple could actually deliver.
Apple may take a new path
A recent report suggested that Apple may now be concluding that developing its own model simply doesn’t make sense.
Some Apple leaders hold the view that large language models will become commodities in the years to come and that spending a fortune now on its own models doesn’t make sense.
This had previously been hinted at in a report that Apple would use Google’s Gemini as the backend for many Siri queries.
The custom Gemini model will run on Apple’s Private Cloud Compute servers, to help fulfil user requests. Apple has promised that the new Siri will be able to answer personal questions like ‘find the book recommendation from Mom’ by hunting through data on your device and generating the appropriate response on the fly.
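To make that quoted design concrete, here’s a minimal Swift sketch of how such a pipeline might work. Every type and function name below is hypothetical (these are not real Apple APIs), but it illustrates the key property: the search over personal data happens on the device, and only a narrow, per-request context travels to the model running on Apple’s Private Cloud Compute (PCC) servers.

```swift
import Foundation

// Hypothetical on-device search result. In a real implementation this
// would come from local Messages/Mail/Notes indexes; the raw data
// itself never leaves the phone.
struct PersonalContext {
    let snippet: String   // e.g. the text of Mom's message
    let source: String    // e.g. "Messages"
}

// Hypothetical local retrieval step (runs entirely on device).
func searchOnDevice(for query: String) -> [PersonalContext] {
    // Placeholder result standing in for a real on-device index lookup.
    return [PersonalContext(snippet: "Mom: you'd love 'Project Hail Mary'",
                            source: "Messages")]
}

// Hypothetical PCC call. Per Apple's published PCC design, the request
// would be encrypted to attested server hardware and not retained after
// processing; this stub simply stands in for the Gemini-based model.
func askPrivateCloudModel(prompt: String,
                          context: [PersonalContext]) async -> String {
    return "Mom recommended 'Project Hail Mary' in Messages."
}

// The flow for a request like "find the book recommendation from Mom":
// 1. retrieve the relevant snippet locally, then
// 2. generate the answer in the cloud using only that snippet as context.
func handleSiriRequest(_ query: String) async -> String {
    let context = searchOnDevice(for: query)
    return await askPrivateCloudModel(prompt: query, context: context)
}
```

The point of the split is that the privacy guarantee comes from the architecture itself: the only personal data the cloud model ever sees is the snippet attached to a single request, regardless of whose model is doing the generating.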
I now think this would be best
After previously hoping that Apple simply needed a bit more time, I have now concluded that this approach would offer the best of all possible worlds.
Currently, I have Siri set to automatically fall back to ChatGPT when it can’t help. I do this with Apple’s assurance that my queries will not be used by OpenAI for training purposes in the way they would be if I used the app directly.
The Gemini report likewise says that, although Google’s model will be used, running it on Apple’s PCC servers means that user privacy would be fully protected.
With this approach, we get the best of what the leading AI companies can offer, coupled with Apple’s ironclad privacy guarantees. Since privacy, not performance, is Apple’s USP when it comes to artificial intelligence, I can see no great value in the company persisting in trying to develop its own models given the very slow rate of progress.
My own view now, then, is that Apple should go full steam ahead on using the best available AI models running on its own PCC servers with its own privacy guarantees.
Do you agree? Please share your thoughts in the comments.