We think we see the world as it is, but in fact we see it through a thick fog of received knowledge and ideas, some of which are right and some of which are wrong. Like maps, ideas and beliefs shape our experience of the world. The notion that AI is somehow unprecedented, that artificial general intelligence is just around the corner and will lead to a singularity beyond which everything is different, is one such map. It has shaped not just technology investment but government policy and economic expectations. But what if it’s wrong?
The best ideas help us see the world more clearly, cutting through the fog of hype. That’s why I was so excited to read Arvind Narayanan and Sayash Kapoor’s essay “AI as Normal Technology.” They make the case that while AI is indeed transformational, it is far from unprecedented. Instead, it is likely to follow much the same patterns as other profound technology revolutions, such as electrification, the automobile, and the internet. That is, the tempo of technological change isn’t set by the pace of innovation but rather by the pace of adoption, which is gated by economic, social, and infrastructure factors, and by the need of humans to adapt to the changes. (In some ways, this idea echoes Stewart Brand’s notion of “pace layers.”)
What Do We Mean by “Normal Technology”?
Arvind Narayanan is a professor of computer science at Princeton who also thinks deeply about the impact of technology on society and the policy issues it raises. He joined me last week on Live with Tim O’Reilly to talk about his ideas. I started out by asking him to explain what he means by “normal technology.” Here’s a shortened version of his reply. (You can watch a more complete video answer and my reply here.)
There is, it turns out, a well-established theory of the way in which technologies are adopted and diffused throughout society. The key thing to keep in mind is that the logic behind the pace of advances in technology capabilities is different from the logic behind the way, and the speed at which, technology gets adopted. That depends on the rate at which human behavior can change, and at which organizations can figure out new business models. And I don’t mean the AI companies. There’s too much of a focus on the AI companies in thinking about the future of AI. I’m talking about all the other companies who are going to be deploying AI.
So we present a four-stage framework. The first stage is invention. So this is improvements in model capabilities.…The model capabilities themselves have to be translated into products. That’s the second stage. That’s product development. And we’re still early in the second stage of figuring out what the right abstractions are, through which this very unreliable technology of large language models ([as] one prominent type of AI) can be fit into what we have come to expect from software: that it should work very deterministically, and that once users have learned how to do something, their expectations will be fulfilled. And when those expectations are violated, we see that AI product launches have gone horribly wrong.…Stage three is diffusion. It starts with early users figuring out use cases, workflows, risks, how to route around them.…And the last and most time-consuming stage is adaptation. Not only do individual users need to adapt; industries as a whole need to adapt. In some cases, laws need to adapt.
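To make that abstraction problem concrete, here’s a small sketch of my own, not something Arvind proposed. One pattern product teams reach for is wrapping the model’s unreliable, free-form output in a validate-and-retry loop, so that the rest of the system sees the deterministic contract users expect. The extract_fields function and the toy fake_model below are hypothetical stand-ins for whatever LLM client you actually use:

```python
import json
from typing import Callable

# The deterministic contract the rest of the software depends on.
REQUIRED_KEYS = {"summary", "sentiment"}

def extract_fields(text: str, call_model: Callable[[str], str],
                   max_retries: int = 3) -> dict:
    """Wrap an unreliable model call in a validate-and-retry loop.

    The caller either gets a dict with the required keys or an error;
    it never has to cope with malformed or off-spec model output.
    """
    prompt = (
        "Return ONLY a JSON object with keys 'summary' and 'sentiment' "
        "for this text:\n" + text
    )
    for _ in range(max_retries):
        raw = call_model(prompt)  # may be malformed: LLM output is unreliable
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not valid JSON; try again
        if isinstance(data, dict) and REQUIRED_KEYS <= data.keys():
            return data  # contract satisfied
    raise ValueError("model never met the output contract")

# Toy stand-in for a real LLM client, just to show the flow:
def fake_model(prompt: str) -> str:
    return '{"summary": "a short note", "sentiment": "neutral"}'

print(extract_fields("Some customer feedback.", fake_model))
```

It’s a trivial example, but it illustrates the second-stage work Arvind is describing: the hard part isn’t the model call, it’s designing abstractions that give software back its determinism.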
We talked a bit about how that has happened in the past, using electrification as one well-known example. The first stage of the Industrial Revolution was powered by coal and steam, in factories built around big, centralized power plants. Early attempts at factory electrification didn’t provide all that much advantage. It was only when manufacturers realized that electricity made it possible to easily distribute power to small, specialized machines dedicated to different factory functions that the second industrial revolution really took off.
Arvind made it real by talking about how AI might change software. It’s not about replacing programmers, he thinks, but about expanding the footprint of software customization.
So some people hope that in the future, just like we can vibe code small apps, it becomes possible to build much more complex pieces of enterprise software just based on a prompt. Okay, suppose that’s possible.…I claim that in that world, it will make no sense for these enterprise software companies to build software once and then force thousands of different clients to adjust their workflows to the abstractions defined in the software. That’s not going to be how we’ll use software in this future world.
What will happen is that developers are going to work with each downstream client, understand their requirements, and then perhaps generate software for them on the spot to meet a particular team’s needs or a particular company’s needs, or even perhaps a particular individual’s needs. So this is a complete conceptual revision of what enterprise software even means. And this is the kind of thing that we think is going to take decades. And it has little to do with the rate of AI capability improvement.
This is a great example of what I mean by ideas as tools for seeing and responding to the world more effectively. The “normal technology” map will lead investors and entrepreneurs to make different choices than those who follow the “AI singularity” map. Over the long run, those who are guided by the more accurate map will end up building lasting businesses, while the others will end up as casualties of the bubble.
We’ll be talking more deeply about how AI is changing the software industry at our second AI Codecon, coming up on September 9: Coding for the Agentic World.
Physical and Behavioral Constraints on AI Adoption
We also talked a bit about physical constraints (though I have to confess that this was more my focus than his). For example, the flowering of the 20th-century automobile economy required the development of better roads, better tires, improvements to brakes, lights, and engines, refining and distribution networks for gasoline, the reshaping of cities, and far more. We see this today in the bottlenecks around GPUs, around data center construction, around power. All of these things take time to get built.
Arvind’s main focus was on the behavioral issues slowing adoption. He gave a great example:
So there are these “reasoning models.” (Whether they’re actually reasoning is a different question.)…Models like o3, they’re actually very useful. They can do a lot of things that nonreasoning models can’t. And they started to be released around a year ago. And it turns out, based on Sam Altman’s own admission, that in the free tier of ChatGPT, less than 1% of users were using them per day. And in the paid tier, less than 7% of users were using them.…So this shows you how much diffusion lags behind capabilities. It’s exactly an illustration of the point that diffusion—changes to user workflows, learning new skills, those kinds of things—is the real bottleneck.
And of course, the user backlash over the loss of the “personality” of GPT-4o drives this home even more, and raises a whole lot of new uncertainty. I thought Arvind nailed it when he called personality changes “a whole new switching cost.”
It is because AI is a normal technology that Arvind also thinks fears of AI running amok are overblown:
We don’t think the arrival of recursive self-improvement, for instance, if that were to happen, would be an exception to these patterns. We talk a lot about AI safety in the paper. We’re glad that many people are thinking carefully about AI safety. We don’t think it requires any extraordinary steps like pausing AI or banning open source AI or things like that. Safety is amenable to well-understood market and regulatory interventions.
When we say AI as normal technology, it’s not just a prediction about the future. One of the core points of the paper is that we have the agency to shape it as normal technology. We have the agency to ensure that the path through which it diffuses through society is not governed by the logic of the technology itself but rather by humans and institutions.
I agree. Human agency in the face of AI is also one of the deep currents in my book WTF?: What’s the Future and Why It’s Up to Us.
AI KPIs and the “Golden Rule”
One of my favorite moments was when one of the attendees asked whether a good guide to the KPIs used by AI companies oughtn’t to be what they would want the AI to do for themselves, their children, and their loved ones. This, of course, is not only a version of the Golden Rule, found in many religions and philosophies, but also really good practical business advice. My own philosophical mentor, Lao Tzu, once wrote, “Fail to honor people, they fail to honor you.” And also this: “Losing the way of life, people rely on goodness. Losing goodness, they rely on laws.” (That’s my own loose retranslation of Witter Bynner’s version.) I first thought of the relevance of this quote in the days of my early open source activism. While others were focused on free and open source licenses (laws) as the key to its success, I was interested in figuring out why open source would win just by being better for people—matching “the way of life,” so to speak. Science, not religion.
Why Labor Law, Not Copyright, May Be the Key to AI Justice
In response to an attendee question about AI and copyright, Arvind once again demonstrated his ability to productively reframe the issue:
While my moral sympathies are with the plaintiffs in this case, I don’t think copyright is the right way to bring justice to the authors and photographers and publishers and others who genuinely, I think, have been wronged by these companies using their data without consent or compensation. And the reason for that is that it’s a labor issue. It’s not something that copyright was invented to deal with, and even if a future ruling goes a different way, I think companies will be able to adapt their processes so that they stay clear of copyright law while nonetheless essentially leaving their business model unchanged. And unless you can change their business model, force them to negotiate with these creators—with the little guy, basically—and work out a just compensation agreement, I don’t think justice will be served.
When the Writers Guild of America went on strike in part over AI and won, it showed just how right he is in this reframing. That fight has faded from the headlines, but it points a way forward to a fairer AI economy.
AI and Continuous Learning
We ended with another attendee question, about what kids should learn now to be ready for the future.
We have, in my view, a weird education system. And I’ve said this publicly for as long as I’ve been a professor: this concept that you stay in school for 20 years or whatever, right through the end of college, and then you’re fully trained, and then you go off into the workforce and just use those skills that you once learned.
Obviously, we know that the world doesn’t work like that. And that’s a big part of the reason why the college experience is so miserable for so many students. Because they’d actually rather be doing stuff than sitting in this decontextualized environment where they’re supposed to just passively absorb information to use someday in the future.
So I think AI is an opportunity to fix this deeply broken approach to education. I think kids can start making meaningful contributions to the world much earlier than they’re expected to.
So that’s one half of the story: You can learn much better when you’re actually motivated to produce something useful. The second half of the story is that it’s more true than ever that we should never stop learning.
But it is time to stop my summary! If you are a subscriber, or signed up to watch the episode, you should have access to the full recording here.
AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you’ll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It’s free to attend. Register now to save your seat.